Dec 13 01:30:18.066723 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:30:18.066762 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:30:18.066776 kernel: BIOS-provided physical RAM map:
Dec 13 01:30:18.066785 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:30:18.066795 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:30:18.066805 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:30:18.066820 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 01:30:18.066831 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 01:30:18.066842 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 01:30:18.066852 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:30:18.066863 kernel: NX (Execute Disable) protection: active
Dec 13 01:30:18.066873 kernel: APIC: Static calls initialized
Dec 13 01:30:18.066883 kernel: SMBIOS 2.7 present.
Dec 13 01:30:18.066894 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 01:30:18.066909 kernel: Hypervisor detected: KVM
Dec 13 01:30:18.066921 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:30:18.066933 kernel: kvm-clock: using sched offset of 6438662072 cycles
Dec 13 01:30:18.066946 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:30:18.066958 kernel: tsc: Detected 2500.006 MHz processor
Dec 13 01:30:18.066971 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:30:18.066982 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:30:18.066998 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 01:30:18.067010 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:30:18.067021 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:30:18.067934 kernel: Using GB pages for direct mapping
Dec 13 01:30:18.067957 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:30:18.067971 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 01:30:18.067983 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 01:30:18.067995 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:30:18.068007 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 01:30:18.068023 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 01:30:18.073302 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 01:30:18.073350 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:30:18.073367 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 01:30:18.073382 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:30:18.073397 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 01:30:18.073413 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 01:30:18.073427 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 01:30:18.073443 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 01:30:18.073467 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 01:30:18.073488 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 01:30:18.073504 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 01:30:18.073519 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 01:30:18.073535 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 01:30:18.073555 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 01:30:18.073571 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 01:30:18.073587 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 01:30:18.073603 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 01:30:18.073618 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:30:18.073634 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:30:18.073649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 01:30:18.073662 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 01:30:18.073676 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 01:30:18.073696 kernel: Zone ranges:
Dec 13 01:30:18.073711 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:30:18.073725 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 01:30:18.073739 kernel: Normal empty
Dec 13 01:30:18.073752 kernel: Movable zone start for each node
Dec 13 01:30:18.073766 kernel: Early memory node ranges
Dec 13 01:30:18.073779 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:30:18.073793 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 01:30:18.073808 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 01:30:18.073822 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:30:18.073839 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:30:18.073855 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 01:30:18.073870 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 01:30:18.073883 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:30:18.073907 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 01:30:18.073923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:30:18.073939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:30:18.073956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:30:18.073972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:30:18.073992 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:30:18.074008 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:30:18.074022 kernel: TSC deadline timer available
Dec 13 01:30:18.074078 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:30:18.074092 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:30:18.074105 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 01:30:18.074119 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:30:18.074184 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:30:18.074198 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:30:18.074216 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:30:18.074230 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:30:18.074243 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:30:18.074343 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:30:18.074357 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:30:18.074373 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:30:18.074388 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:30:18.074401 kernel: random: crng init done
Dec 13 01:30:18.074418 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:30:18.074432 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:30:18.074447 kernel: Fallback order for Node 0: 0
Dec 13 01:30:18.074461 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 01:30:18.074475 kernel: Policy zone: DMA32
Dec 13 01:30:18.074488 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:30:18.074502 kernel: Memory: 1932344K/2057760K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125156K reserved, 0K cma-reserved)
Dec 13 01:30:18.074516 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:30:18.074530 kernel: Kernel/User page tables isolation: enabled
Dec 13 01:30:18.074547 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:30:18.074561 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:30:18.074575 kernel: Dynamic Preempt: voluntary
Dec 13 01:30:18.074589 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:30:18.074604 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:30:18.074618 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:30:18.074632 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:30:18.074646 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:30:18.074661 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:30:18.074678 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:30:18.074692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:30:18.074706 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:30:18.074721 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:30:18.074735 kernel: Console: colour VGA+ 80x25
Dec 13 01:30:18.074749 kernel: printk: console [ttyS0] enabled
Dec 13 01:30:18.074763 kernel: ACPI: Core revision 20230628
Dec 13 01:30:18.074778 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 01:30:18.074793 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:30:18.074810 kernel: x2apic enabled
Dec 13 01:30:18.074825 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:30:18.074850 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Dec 13 01:30:18.074869 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006)
Dec 13 01:30:18.074884 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:30:18.074900 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:30:18.074915 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:30:18.074930 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:30:18.074945 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:30:18.074959 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:30:18.074975 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 01:30:18.074990 kernel: RETBleed: Vulnerable
Dec 13 01:30:18.075009 kernel: Speculative Store Bypass: Vulnerable
Dec 13 01:30:18.075024 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:30:18.075096 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:30:18.075112 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:30:18.075127 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:30:18.075143 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:30:18.075162 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:30:18.075177 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 01:30:18.075193 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 01:30:18.075208 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 01:30:18.075224 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 01:30:18.075239 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 01:30:18.075255 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 01:30:18.075269 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:30:18.075283 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 01:30:18.075298 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 01:30:18.075312 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 01:30:18.075331 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 01:30:18.075346 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 01:30:18.075360 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 01:30:18.075375 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 01:30:18.075390 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:30:18.075403 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:30:18.075417 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:30:18.075433 kernel: landlock: Up and running.
Dec 13 01:30:18.075449 kernel: SELinux: Initializing.
Dec 13 01:30:18.075463 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:30:18.075479 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:30:18.075493 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 01:30:18.075513 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:30:18.075528 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:30:18.075543 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:30:18.075557 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 01:30:18.075572 kernel: signal: max sigframe size: 3632
Dec 13 01:30:18.075587 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:30:18.075602 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:30:18.075616 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:30:18.075631 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:30:18.075649 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:30:18.075664 kernel: .... node #0, CPUs: #1
Dec 13 01:30:18.075680 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 01:30:18.075697 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:30:18.075713 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:30:18.075728 kernel: smpboot: Max logical packages: 1
Dec 13 01:30:18.075745 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS)
Dec 13 01:30:18.075759 kernel: devtmpfs: initialized
Dec 13 01:30:18.075778 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:30:18.075793 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:30:18.075808 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:30:18.075823 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:30:18.075838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:30:18.075854 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:30:18.075870 kernel: audit: type=2000 audit(1734053416.610:1): state=initialized audit_enabled=0 res=1
Dec 13 01:30:18.075885 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:30:18.075901 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:30:18.075920 kernel: cpuidle: using governor menu
Dec 13 01:30:18.075932 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:30:18.075946 kernel: dca service started, version 1.12.1
Dec 13 01:30:18.075961 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:30:18.075975 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:30:18.075989 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:30:18.076003 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:30:18.076019 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:30:18.076047 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:30:18.076064 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:30:18.076077 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:30:18.076092 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:30:18.076109 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:30:18.076127 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 01:30:18.076142 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:30:18.076166 kernel: ACPI: Interpreter enabled
Dec 13 01:30:18.076178 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:30:18.076192 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:30:18.076212 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:30:18.076229 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:30:18.076246 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 01:30:18.076263 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:30:18.076486 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:30:18.076631 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 01:30:18.076765 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 01:30:18.076861 kernel: acpiphp: Slot [3] registered
Dec 13 01:30:18.076881 kernel: acpiphp: Slot [4] registered
Dec 13 01:30:18.076898 kernel: acpiphp: Slot [5] registered
Dec 13 01:30:18.076915 kernel: acpiphp: Slot [6] registered
Dec 13 01:30:18.076932 kernel: acpiphp: Slot [7] registered
Dec 13 01:30:18.076948 kernel: acpiphp: Slot [8] registered
Dec 13 01:30:18.076965 kernel: acpiphp: Slot [9] registered
Dec 13 01:30:18.076982 kernel: acpiphp: Slot [10] registered
Dec 13 01:30:18.076998 kernel: acpiphp: Slot [11] registered
Dec 13 01:30:18.077015 kernel: acpiphp: Slot [12] registered
Dec 13 01:30:18.078092 kernel: acpiphp: Slot [13] registered
Dec 13 01:30:18.078112 kernel: acpiphp: Slot [14] registered
Dec 13 01:30:18.078126 kernel: acpiphp: Slot [15] registered
Dec 13 01:30:18.078147 kernel: acpiphp: Slot [16] registered
Dec 13 01:30:18.078164 kernel: acpiphp: Slot [17] registered
Dec 13 01:30:18.078181 kernel: acpiphp: Slot [18] registered
Dec 13 01:30:18.078198 kernel: acpiphp: Slot [19] registered
Dec 13 01:30:18.078214 kernel: acpiphp: Slot [20] registered
Dec 13 01:30:18.078231 kernel: acpiphp: Slot [21] registered
Dec 13 01:30:18.078253 kernel: acpiphp: Slot [22] registered
Dec 13 01:30:18.078270 kernel: acpiphp: Slot [23] registered
Dec 13 01:30:18.078286 kernel: acpiphp: Slot [24] registered
Dec 13 01:30:18.078303 kernel: acpiphp: Slot [25] registered
Dec 13 01:30:18.078320 kernel: acpiphp: Slot [26] registered
Dec 13 01:30:18.078336 kernel: acpiphp: Slot [27] registered
Dec 13 01:30:18.078354 kernel: acpiphp: Slot [28] registered
Dec 13 01:30:18.078371 kernel: acpiphp: Slot [29] registered
Dec 13 01:30:18.078387 kernel: acpiphp: Slot [30] registered
Dec 13 01:30:18.078404 kernel: acpiphp: Slot [31] registered
Dec 13 01:30:18.078424 kernel: PCI host bridge to bus 0000:00
Dec 13 01:30:18.078616 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:30:18.078745 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:30:18.078868 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:30:18.078992 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 01:30:18.079137 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:30:18.079294 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 01:30:18.079452 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 01:30:18.079597 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 01:30:18.079734 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 01:30:18.079869 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 01:30:18.080003 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 01:30:18.082254 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 01:30:18.082419 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 01:30:18.082558 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 01:30:18.082696 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 01:30:18.082830 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 01:30:18.082965 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs
Dec 13 01:30:18.085177 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 01:30:18.085333 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 01:30:18.085481 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 01:30:18.085616 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:30:18.085759 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:30:18.085902 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 01:30:18.087093 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:30:18.087269 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 01:30:18.087292 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:30:18.087315 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:30:18.087332 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:30:18.087349 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:30:18.087366 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 01:30:18.087383 kernel: iommu: Default domain type: Translated
Dec 13 01:30:18.087400 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:30:18.087417 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:30:18.087433 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:30:18.087451 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:30:18.087470 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 01:30:18.087608 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 01:30:18.087744 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 01:30:18.087880 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:30:18.087901 kernel: vgaarb: loaded
Dec 13 01:30:18.087918 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 01:30:18.087935 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 01:30:18.087952 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:30:18.087972 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:30:18.087989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:30:18.088007 kernel: pnp: PnP ACPI init
Dec 13 01:30:18.088024 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:30:18.088052 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:30:18.088069 kernel: NET: Registered PF_INET protocol family
Dec 13 01:30:18.088085 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:30:18.088103 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:30:18.088120 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:30:18.088140 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:30:18.088157 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:30:18.088173 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:30:18.088190 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:30:18.088207 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:30:18.088223 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:30:18.088240 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:30:18.088369 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:30:18.088495 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:30:18.088616 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:30:18.088738 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 01:30:18.088955 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:30:18.088980 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:30:18.088998 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:30:18.089015 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Dec 13 01:30:18.089032 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:30:18.091165 kernel: Initialise system trusted keyrings
Dec 13 01:30:18.091190 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:30:18.091208 kernel: Key type asymmetric registered
Dec 13 01:30:18.091225 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:30:18.091241 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:30:18.091258 kernel: io scheduler mq-deadline registered
Dec 13 01:30:18.091275 kernel: io scheduler kyber registered
Dec 13 01:30:18.091292 kernel: io scheduler bfq registered
Dec 13 01:30:18.091309 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:30:18.091326 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:30:18.091345 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:30:18.091362 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:30:18.091379 kernel: i8042: Warning: Keylock active
Dec 13 01:30:18.091396 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:30:18.091412 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:30:18.091584 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 01:30:18.091715 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 01:30:18.091841 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:30:17 UTC (1734053417)
Dec 13 01:30:18.091970 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 01:30:18.091991 kernel: intel_pstate: CPU model not supported
Dec 13 01:30:18.092008 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:30:18.092025 kernel: Segment Routing with IPv6
Dec 13 01:30:18.092888 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:30:18.092909 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:30:18.092927 kernel: Key type dns_resolver registered
Dec 13 01:30:18.092944 kernel: IPI shorthand broadcast: enabled
Dec 13 01:30:18.092961 kernel: sched_clock: Marking stable (735003413, 323330245)->(1159969115, -101635457)
Dec 13 01:30:18.092982 kernel: registered taskstats version 1
Dec 13 01:30:18.092999 kernel: Loading compiled-in X.509 certificates
Dec 13 01:30:18.093016 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:30:18.093032 kernel: Key type .fscrypt registered
Dec 13 01:30:18.094075 kernel: Key type fscrypt-provisioning registered
Dec 13 01:30:18.094093 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:30:18.094110 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:30:18.094127 kernel: ima: No architecture policies found
Dec 13 01:30:18.094148 kernel: clk: Disabling unused clocks
Dec 13 01:30:18.094165 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:30:18.094181 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:30:18.094198 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:30:18.094216 kernel: Run /init as init process
Dec 13 01:30:18.094233 kernel: with arguments:
Dec 13 01:30:18.094249 kernel: /init
Dec 13 01:30:18.094265 kernel: with environment:
Dec 13 01:30:18.094281 kernel: HOME=/
Dec 13 01:30:18.094297 kernel: TERM=linux
Dec 13 01:30:18.094317 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:30:18.094363 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:30:18.094385 systemd[1]: Detected virtualization amazon.
Dec 13 01:30:18.094403 systemd[1]: Detected architecture x86-64.
Dec 13 01:30:18.094421 systemd[1]: Running in initrd.
Dec 13 01:30:18.094438 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:30:18.094459 systemd[1]: Hostname set to .
Dec 13 01:30:18.094477 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:30:18.094495 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:30:18.094514 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:30:18.094532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:18.094553 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:30:18.094571 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:30:18.094590 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:30:18.094611 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:30:18.094632 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:30:18.094651 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:30:18.094670 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:30:18.094689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:30:18.094707 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:30:18.094725 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:30:18.094747 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:30:18.094765 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:30:18.094783 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:30:18.094801 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:30:18.094820 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:30:18.094838 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:30:18.094856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:30:18.094874 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:30:18.094893 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:30:18.094914 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:30:18.094932 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:30:18.094951 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:30:18.094969 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:30:18.094987 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:30:18.095011 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:30:18.095030 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:30:18.096144 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:30:18.096203 systemd-journald[178]: Collecting audit messages is disabled.
Dec 13 01:30:18.096250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:30:18.096269 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:30:18.096289 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:30:18.096308 systemd-journald[178]: Journal started
Dec 13 01:30:18.096346 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2815a38088a322bda5a6e509cfdf4e) is 4.8M, max 38.6M, 33.7M free.
Dec 13 01:30:18.086435 systemd-modules-load[179]: Inserted module 'overlay'
Dec 13 01:30:18.101345 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:30:18.100873 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:30:18.116230 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:30:18.131250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:30:18.250129 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:30:18.250166 kernel: Bridge firewalling registered
Dec 13 01:30:18.143234 systemd-modules-load[179]: Inserted module 'br_netfilter'
Dec 13 01:30:18.251756 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:30:18.256177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:18.266498 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:30:18.279217 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:30:18.291921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:30:18.299162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:30:18.299653 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:30:18.323889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:30:18.346357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:30:18.353242 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:30:18.377678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:18.388566 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:30:18.412590 dracut-cmdline[213]: dracut-dracut-053
Dec 13 01:30:18.422918 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:30:18.444317 systemd-resolved[206]: Positive Trust Anchors:
Dec 13 01:30:18.444336 systemd-resolved[206]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:30:18.444398 systemd-resolved[206]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:30:18.460324 systemd-resolved[206]: Defaulting to hostname 'linux'.
Dec 13 01:30:18.463138 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:30:18.465955 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:30:18.534064 kernel: SCSI subsystem initialized
Dec 13 01:30:18.545068 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:30:18.559062 kernel: iscsi: registered transport (tcp)
Dec 13 01:30:18.585070 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:30:18.585153 kernel: QLogic iSCSI HBA Driver
Dec 13 01:30:18.630886 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:30:18.637276 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:30:18.671206 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:30:18.671296 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:30:18.671321 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:30:18.716093 kernel: raid6: avx512x4 gen() 16429 MB/s
Dec 13 01:30:18.733093 kernel: raid6: avx512x2 gen() 13450 MB/s
Dec 13 01:30:18.750083 kernel: raid6: avx512x1 gen() 14702 MB/s
Dec 13 01:30:18.767093 kernel: raid6: avx2x4 gen() 15086 MB/s
Dec 13 01:30:18.784082 kernel: raid6: avx2x2 gen() 15752 MB/s
Dec 13 01:30:18.801081 kernel: raid6: avx2x1 gen() 11950 MB/s
Dec 13 01:30:18.801174 kernel: raid6: using algorithm avx512x4 gen() 16429 MB/s
Dec 13 01:30:18.819070 kernel: raid6: .... xor() 6921 MB/s, rmw enabled
Dec 13 01:30:18.819154 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 01:30:18.844067 kernel: xor: automatically using best checksumming function avx
Dec 13 01:30:19.036069 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:30:19.049726 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:30:19.057269 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:30:19.074558 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Dec 13 01:30:19.080567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:30:19.100132 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:30:19.127563 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Dec 13 01:30:19.168840 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:30:19.176291 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:30:19.249444 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:30:19.261263 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:30:19.347842 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:30:19.361595 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:30:19.363755 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:30:19.371592 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:30:19.386508 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:30:19.436555 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:30:19.495550 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:30:19.495754 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 01:30:19.495921 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:02:21:e2:31:0b
Dec 13 01:30:19.496115 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:30:19.461008 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:30:19.507088 (udev-worker)[445]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:30:19.548563 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:30:19.548640 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:30:19.555838 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:30:19.556319 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:19.556871 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:30:19.556976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:30:19.557320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:19.557692 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:30:19.577539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:30:19.593516 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:30:19.593776 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 01:30:19.606136 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:30:19.610728 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:30:19.610800 kernel: GPT:9289727 != 16777215
Dec 13 01:30:19.610826 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:30:19.610844 kernel: GPT:9289727 != 16777215
Dec 13 01:30:19.610861 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:30:19.610878 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:30:19.743057 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Dec 13 01:30:19.758102 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (445)
Dec 13 01:30:19.777793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:19.787322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:30:19.829626 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:19.858366 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:30:19.881279 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:30:19.883310 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:30:19.903122 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:30:19.919698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:30:19.932424 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:30:19.942883 disk-uuid[632]: Primary Header is updated.
Dec 13 01:30:19.942883 disk-uuid[632]: Secondary Entries is updated.
Dec 13 01:30:19.942883 disk-uuid[632]: Secondary Header is updated.
Dec 13 01:30:19.950113 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:30:19.959118 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:30:20.965600 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:30:20.966945 disk-uuid[633]: The operation has completed successfully.
Dec 13 01:30:21.184346 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:30:21.184483 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:30:21.226255 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:30:21.248433 sh[891]: Success
Dec 13 01:30:21.280152 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:30:21.394735 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:30:21.413249 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:30:21.417272 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:30:21.454885 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:30:21.454948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:21.454968 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:30:21.454986 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:30:21.456607 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:30:21.589064 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:30:21.592057 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:30:21.595023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:30:21.602246 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:30:21.609416 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:30:21.632304 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:21.632507 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:21.632533 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:30:21.639878 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:30:21.653062 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:21.653397 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:30:21.672826 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:30:21.682232 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:30:21.771459 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:30:21.780298 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:30:21.807118 systemd-networkd[1083]: lo: Link UP
Dec 13 01:30:21.807129 systemd-networkd[1083]: lo: Gained carrier
Dec 13 01:30:21.809875 systemd-networkd[1083]: Enumeration completed
Dec 13 01:30:21.810007 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:30:21.811316 systemd[1]: Reached target network.target - Network.
Dec 13 01:30:21.815197 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:30:21.815208 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:30:21.821252 systemd-networkd[1083]: eth0: Link UP
Dec 13 01:30:21.821258 systemd-networkd[1083]: eth0: Gained carrier
Dec 13 01:30:21.821268 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:30:21.843137 systemd-networkd[1083]: eth0: DHCPv4 address 172.31.22.26/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:30:22.208946 ignition[1028]: Ignition 2.19.0
Dec 13 01:30:22.208959 ignition[1028]: Stage: fetch-offline
Dec 13 01:30:22.209249 ignition[1028]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:22.209264 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:30:22.212114 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:30:22.210208 ignition[1028]: Ignition finished successfully
Dec 13 01:30:22.223258 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:30:22.240280 ignition[1091]: Ignition 2.19.0
Dec 13 01:30:22.240291 ignition[1091]: Stage: fetch
Dec 13 01:30:22.240637 ignition[1091]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:22.240651 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:30:22.240733 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:30:22.272628 ignition[1091]: PUT result: OK
Dec 13 01:30:22.275826 ignition[1091]: parsed url from cmdline: ""
Dec 13 01:30:22.275837 ignition[1091]: no config URL provided
Dec 13 01:30:22.275847 ignition[1091]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:30:22.275862 ignition[1091]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:30:22.275885 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:30:22.277198 ignition[1091]: PUT result: OK
Dec 13 01:30:22.277442 ignition[1091]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:30:22.281198 ignition[1091]: GET result: OK
Dec 13 01:30:22.281252 ignition[1091]: parsing config with SHA512: 0c9fcb7a92891c1e2c80614955fbe12443ccc5b727c31886067996fd178fc0b223f0709e48fa6470d72a8131ac427ae6f18180b1cc222c9814b67a43dd89f2fe
Dec 13 01:30:22.292094 unknown[1091]: fetched base config from "system"
Dec 13 01:30:22.292110 unknown[1091]: fetched base config from "system"
Dec 13 01:30:22.293348 ignition[1091]: fetch: fetch complete
Dec 13 01:30:22.292117 unknown[1091]: fetched user config from "aws"
Dec 13 01:30:22.293355 ignition[1091]: fetch: fetch passed
Dec 13 01:30:22.293417 ignition[1091]: Ignition finished successfully
Dec 13 01:30:22.299206 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:30:22.305221 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:30:22.345517 ignition[1097]: Ignition 2.19.0
Dec 13 01:30:22.345532 ignition[1097]: Stage: kargs
Dec 13 01:30:22.346634 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:22.346650 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:30:22.348231 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:30:22.349087 ignition[1097]: PUT result: OK
Dec 13 01:30:22.355923 ignition[1097]: kargs: kargs passed
Dec 13 01:30:22.356099 ignition[1097]: Ignition finished successfully
Dec 13 01:30:22.359011 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:30:22.370287 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:30:22.387327 ignition[1103]: Ignition 2.19.0
Dec 13 01:30:22.387339 ignition[1103]: Stage: disks
Dec 13 01:30:22.387689 ignition[1103]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:22.387698 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:30:22.387775 ignition[1103]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:30:22.389339 ignition[1103]: PUT result: OK
Dec 13 01:30:22.395171 ignition[1103]: disks: disks passed
Dec 13 01:30:22.395232 ignition[1103]: Ignition finished successfully
Dec 13 01:30:22.398299 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:30:22.398561 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:30:22.401468 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:30:22.403792 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:30:22.406372 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:30:22.406441 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:30:22.419229 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:30:22.463867 systemd-fsck[1111]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:30:22.472008 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:30:22.479178 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:30:22.599325 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:30:22.600130 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:30:22.602548 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:30:22.619301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:30:22.624340 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:30:22.627683 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:30:22.627756 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:30:22.627782 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:30:22.654413 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1130)
Dec 13 01:30:22.652719 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:30:22.658860 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:22.658929 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:22.658949 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:30:22.662322 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:30:22.670078 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:30:22.672363 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:30:23.184059 initrd-setup-root[1154]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:30:23.205816 initrd-setup-root[1161]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:30:23.215778 initrd-setup-root[1168]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:30:23.224281 initrd-setup-root[1175]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:30:23.540375 systemd-networkd[1083]: eth0: Gained IPv6LL
Dec 13 01:30:23.650291 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:30:23.658262 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:30:23.667771 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:30:23.691074 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:23.691159 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:30:23.732651 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:30:23.733992 ignition[1243]: INFO : Ignition 2.19.0
Dec 13 01:30:23.735178 ignition[1243]: INFO : Stage: mount
Dec 13 01:30:23.735178 ignition[1243]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:23.735178 ignition[1243]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:30:23.735178 ignition[1243]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:30:23.742343 ignition[1243]: INFO : PUT result: OK
Dec 13 01:30:23.744921 ignition[1243]: INFO : mount: mount passed
Dec 13 01:30:23.745870 ignition[1243]: INFO : Ignition finished successfully
Dec 13 01:30:23.747672 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:30:23.756633 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:30:23.779623 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:30:23.793061 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1254)
Dec 13 01:30:23.793126 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:30:23.794775 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:30:23.794835 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:30:23.800243 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:30:23.802760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:30:23.874064 ignition[1271]: INFO : Ignition 2.19.0
Dec 13 01:30:23.874064 ignition[1271]: INFO : Stage: files
Dec 13 01:30:23.880081 ignition[1271]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:23.880081 ignition[1271]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:30:23.883791 ignition[1271]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:30:23.887753 ignition[1271]: INFO : PUT result: OK
Dec 13 01:30:23.891329 ignition[1271]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:30:23.906743 ignition[1271]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:30:23.906743 ignition[1271]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:30:23.944992 ignition[1271]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:30:23.946986 ignition[1271]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:30:23.950327 unknown[1271]: wrote ssh authorized keys file for user: core
Dec 13 01:30:23.952656 ignition[1271]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:30:23.957272 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:30:23.960906 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:30:23.960906 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:30:23.960906 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:30:23.960906 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:30:23.960906 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:30:23.960906 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:30:23.960906 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 01:30:24.553796 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 01:30:24.859161 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:30:24.861567 ignition[1271]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:30:24.861567 ignition[1271]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:30:24.861567 ignition[1271]: INFO : files: files passed
Dec 13 01:30:24.861567 ignition[1271]: INFO : Ignition finished successfully
Dec 13 01:30:24.867817 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:30:24.884369 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:30:24.891272 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:30:24.906460 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:30:24.906609 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:30:24.935092 initrd-setup-root-after-ignition[1299]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:30:24.935092 initrd-setup-root-after-ignition[1299]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:30:24.948664 initrd-setup-root-after-ignition[1303]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:30:24.960859 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:30:24.968983 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:30:24.984942 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:30:25.041000 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:30:25.041282 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:30:25.041998 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:30:25.042287 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:30:25.042524 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:30:25.046481 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:30:25.085915 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:30:25.095300 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:30:25.131869 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:30:25.136373 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:30:25.138372 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:30:25.143195 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:30:25.143390 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:30:25.151774 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:30:25.154337 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:30:25.156922 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:30:25.158531 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:30:25.161356 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:30:25.165587 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:30:25.168676 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:30:25.172312 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:30:25.177070 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:30:25.183451 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:30:25.184369 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:30:25.184897 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:30:25.196085 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:30:25.201777 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:30:25.205204 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:30:25.206320 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:30:25.209172 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:30:25.215154 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:30:25.219157 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:30:25.219296 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:30:25.220984 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:30:25.221104 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:30:25.237148 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:30:25.248762 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:30:25.253217 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:30:25.256359 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:30:25.260227 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:30:25.261022 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:30:25.269890 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:30:25.270016 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:30:25.292223 ignition[1323]: INFO : Ignition 2.19.0
Dec 13 01:30:25.292223 ignition[1323]: INFO : Stage: umount
Dec 13 01:30:25.295961 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:30:25.295961 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:30:25.295961 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:30:25.302856 ignition[1323]: INFO : PUT result: OK
Dec 13 01:30:25.306813 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:30:25.308190 ignition[1323]: INFO : umount: umount passed
Dec 13 01:30:25.309127 ignition[1323]: INFO : Ignition finished successfully
Dec 13 01:30:25.313605 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:30:25.315299 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:30:25.319868 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:30:25.319993 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:30:25.322575 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:30:25.322650 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:30:25.325218 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:30:25.325285 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:30:25.330292 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:30:25.330369 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:30:25.333794 systemd[1]: Stopped target network.target - Network.
Dec 13 01:30:25.345978 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:30:25.346363 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:30:25.350370 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:30:25.351559 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:30:25.354097 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:25.356405 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:30:25.360187 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:30:25.364810 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:30:25.364861 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:30:25.369865 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:30:25.369939 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:30:25.372570 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:30:25.372656 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:30:25.375420 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:30:25.375501 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:30:25.378479 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:30:25.378542 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:30:25.380782 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:30:25.384068 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:30:25.399863 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:30:25.400094 systemd-networkd[1083]: eth0: DHCPv6 lease lost
Dec 13 01:30:25.402276 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:30:25.406899 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:30:25.408081 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:30:25.411315 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:30:25.411390 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:30:25.421436 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:30:25.423949 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:30:25.424066 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:30:25.427994 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:30:25.428101 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:30:25.430439 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:30:25.430500 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:30:25.433896 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:30:25.433969 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:30:25.445415 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:30:25.463855 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:30:25.464066 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:30:25.469027 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:30:25.469554 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:30:25.473031 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:30:25.473138 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:30:25.474340 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:30:25.474385 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:30:25.491745 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:30:25.491819 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:30:25.496984 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:30:25.497070 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:30:25.502678 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:30:25.502882 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:30:25.519673 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:30:25.522734 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:30:25.522836 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:30:25.528217 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:30:25.528294 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:30:25.531145 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:30:25.531277 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:30:25.533242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:30:25.533310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:25.536089 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:30:25.536175 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:30:25.544048 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:30:25.556657 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:30:25.574081 systemd[1]: Switching root.
Dec 13 01:30:25.609281 systemd-journald[178]: Journal stopped
Dec 13 01:30:28.362329 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:30:28.362443 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:30:28.362512 kernel: SELinux: policy capability open_perms=1
Dec 13 01:30:28.362550 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:30:28.362573 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:30:28.362762 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:30:28.362789 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:30:28.362820 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:30:28.362844 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:30:28.362867 kernel: audit: type=1403 audit(1734053426.751:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:30:28.362898 systemd[1]: Successfully loaded SELinux policy in 81.909ms.
Dec 13 01:30:28.362940 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.004ms.
Dec 13 01:30:28.362967 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:30:28.362993 systemd[1]: Detected virtualization amazon.
Dec 13 01:30:28.363018 systemd[1]: Detected architecture x86-64.
Dec 13 01:30:28.375165 systemd[1]: Detected first boot.
Dec 13 01:30:28.375266 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:30:28.375305 zram_generator::config[1365]: No configuration found.
Dec 13 01:30:28.375328 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:30:28.375350 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:30:28.375372 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:30:28.375393 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:30:28.375418 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:30:28.375438 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:30:28.375458 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:30:28.375481 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:30:28.375501 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:30:28.375521 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:30:28.375598 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:30:28.375622 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:30:28.375641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:30:28.375659 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:28.375680 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:30:28.375701 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:30:28.375725 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:30:28.375745 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:30:28.375764 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:30:28.375782 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:30:28.375803 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:30:28.375824 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:30:28.375846 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:30:28.375870 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:30:28.375890 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:30:28.375914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:30:28.375934 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:30:28.375954 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:30:28.375974 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:30:28.375994 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:30:28.376017 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:30:28.376073 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:30:28.376093 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:30:28.376118 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:30:28.376138 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:30:28.376156 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:30:28.376175 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:30:28.376194 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:28.376216 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:30:28.376237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:30:28.376259 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:30:28.376285 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:30:28.376306 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:30:28.376329 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:30:28.376352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:30:28.376374 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:30:28.376396 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:30:28.376419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:30:28.376440 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:30:28.376469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:30:28.376778 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:30:28.376831 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:30:28.376853 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:30:28.376874 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:30:28.376894 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:30:28.376914 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:30:28.376932 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:30:28.376949 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:30:28.377023 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:30:28.377061 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:30:28.377081 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:30:28.377099 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:30:28.377118 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:30:28.377137 systemd[1]: Stopped verity-setup.service.
Dec 13 01:30:28.377160 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:28.377179 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:30:28.377198 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:30:28.377222 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:30:28.377241 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:30:28.377259 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:30:28.377278 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:30:28.377298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:30:28.377320 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:30:28.377341 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:30:28.377360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:30:28.377378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:30:28.377397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:30:28.377415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:30:28.377434 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:30:28.377457 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:30:28.377475 kernel: loop: module loaded
Dec 13 01:30:28.377496 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:30:28.377514 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:30:28.377619 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:30:28.377640 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:30:28.377661 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:30:28.377739 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:30:28.377809 systemd-journald[1444]: Collecting audit messages is disabled.
Dec 13 01:30:28.377846 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:30:28.377865 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:30:28.377923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:30:28.377948 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:30:28.377969 systemd-journald[1444]: Journal started
Dec 13 01:30:28.386094 systemd-journald[1444]: Runtime Journal (/run/log/journal/ec2815a38088a322bda5a6e509cfdf4e) is 4.8M, max 38.6M, 33.7M free.
Dec 13 01:30:28.386195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:30:27.778017 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:30:27.818467 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 01:30:27.818860 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:30:28.399126 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:30:28.412767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:30:28.412872 kernel: fuse: init (API version 7.39)
Dec 13 01:30:28.412916 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:30:28.433072 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:30:28.455146 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:30:28.445893 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:30:28.447298 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:30:28.449525 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:30:28.449814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:30:28.452250 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:30:28.454866 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:30:28.472214 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:30:28.513161 kernel: ACPI: bus type drm_connector registered
Dec 13 01:30:28.515853 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:30:28.518507 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:30:28.554099 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:30:28.568331 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:30:28.570229 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:30:28.574253 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:30:28.576430 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:30:28.588263 kernel: loop0: detected capacity change from 0 to 142488
Dec 13 01:30:28.586993 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:30:28.602482 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:30:28.682646 systemd-journald[1444]: Time spent on flushing to /var/log/journal/ec2815a38088a322bda5a6e509cfdf4e is 148.569ms for 949 entries.
Dec 13 01:30:28.682646 systemd-journald[1444]: System Journal (/var/log/journal/ec2815a38088a322bda5a6e509cfdf4e) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:30:28.856910 systemd-journald[1444]: Received client request to flush runtime journal.
Dec 13 01:30:28.856993 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:30:28.857026 kernel: loop1: detected capacity change from 0 to 61336
Dec 13 01:30:28.735692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:30:28.748403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:30:28.753976 systemd-tmpfiles[1473]: ACLs are not supported, ignoring.
Dec 13 01:30:28.754000 systemd-tmpfiles[1473]: ACLs are not supported, ignoring.
Dec 13 01:30:28.762236 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:30:28.789716 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:30:28.792519 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:30:28.805117 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:30:28.813737 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:30:28.839516 udevadm[1504]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:30:28.866362 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:30:28.913786 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:30:28.930592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:30:28.967630 systemd-tmpfiles[1514]: ACLs are not supported, ignoring.
Dec 13 01:30:28.967661 systemd-tmpfiles[1514]: ACLs are not supported, ignoring.
Dec 13 01:30:28.978307 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:30:28.997121 kernel: loop2: detected capacity change from 0 to 140768
Dec 13 01:30:29.161606 kernel: loop3: detected capacity change from 0 to 205544
Dec 13 01:30:29.218063 kernel: loop4: detected capacity change from 0 to 142488
Dec 13 01:30:29.259158 kernel: loop5: detected capacity change from 0 to 61336
Dec 13 01:30:29.274062 kernel: loop6: detected capacity change from 0 to 140768
Dec 13 01:30:29.307066 kernel: loop7: detected capacity change from 0 to 205544
Dec 13 01:30:29.327856 (sd-merge)[1521]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 13 01:30:29.329083 (sd-merge)[1521]: Merged extensions into '/usr'.
Dec 13 01:30:29.341416 systemd[1]: Reloading requested from client PID 1472 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:30:29.341491 systemd[1]: Reloading...
Dec 13 01:30:29.490069 zram_generator::config[1546]: No configuration found.
Dec 13 01:30:29.807480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:30:29.929803 systemd[1]: Reloading finished in 586 ms.
Dec 13 01:30:29.961819 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:30:29.976387 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:30:29.985352 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:30:29.999272 systemd[1]: Reloading requested from client PID 1595 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:30:29.999293 systemd[1]: Reloading...
Dec 13 01:30:30.025946 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:30:30.027131 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:30:30.028951 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:30:30.029614 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Dec 13 01:30:30.029836 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Dec 13 01:30:30.059733 systemd-tmpfiles[1596]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:30:30.059753 systemd-tmpfiles[1596]: Skipping /boot
Dec 13 01:30:30.082326 systemd-tmpfiles[1596]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:30:30.082347 systemd-tmpfiles[1596]: Skipping /boot
Dec 13 01:30:30.153061 zram_generator::config[1624]: No configuration found.
Dec 13 01:30:30.424089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:30:30.505703 systemd[1]: Reloading finished in 505 ms.
Dec 13 01:30:30.525381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:30:30.532568 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:30:30.565294 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:30:30.572065 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:30:30.577395 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:30:30.600687 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:30:30.608890 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:30:30.620322 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:30:30.636856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:30.637165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:30:30.650712 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:30:30.659494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:30:30.674335 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:30:30.676452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:30:30.676654 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:30.678003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:30:30.683775 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:30:30.696102 systemd-udevd[1686]: Using default interface naming scheme 'v255'.
Dec 13 01:30:30.703737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:30.704251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:30:30.712832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:30:30.716368 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:30:30.727188 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:30:30.729608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:30.741972 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:30:30.744812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:30:30.745025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:30:30.747814 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:30:30.748212 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:30:30.760697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:30:30.761133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:30:30.779095 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:30:30.782936 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:30.785337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:30:30.796984 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:30:30.808360 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:30:30.820270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:30:30.838139 ldconfig[1467]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:30:30.830338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:30:30.831948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:30:30.832158 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:30:30.833758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:30:30.834383 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:30:30.844676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:30:30.862366 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:30:30.865391 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:30:30.876677 augenrules[1714]: No rules
Dec 13 01:30:30.882451 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:30:30.884890 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:30:30.896767 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:30:30.910474 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:30:30.918958 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:30:30.939178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:30:30.939391 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:30:30.942142 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:30:30.945109 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:30:30.946966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:30:30.947643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:30:30.951073 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:30:30.954225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:30:30.954436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:30:30.989382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:30:30.989470 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:30:31.065629 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:30:31.080083 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1720)
Dec 13 01:30:31.084720 (udev-worker)[1742]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:30:31.121767 systemd-networkd[1713]: lo: Link UP
Dec 13 01:30:31.121784 systemd-networkd[1713]: lo: Gained carrier
Dec 13 01:30:31.125404 systemd-networkd[1713]: Enumeration completed
Dec 13 01:30:31.125551 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:30:31.126482 systemd-networkd[1713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:30:31.130114 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1720)
Dec 13 01:30:31.126488 systemd-networkd[1713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:30:31.135223 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:30:31.137842 systemd-networkd[1713]: eth0: Link UP
Dec 13 01:30:31.138095 systemd-networkd[1713]: eth0: Gained carrier
Dec 13 01:30:31.138124 systemd-networkd[1713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:30:31.149381 systemd-resolved[1685]: Positive Trust Anchors:
Dec 13 01:30:31.149391 systemd-resolved[1685]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:30:31.149444 systemd-resolved[1685]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:30:31.154286 systemd-networkd[1713]: eth0: DHCPv4 address 172.31.22.26/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:30:31.160726 systemd-resolved[1685]: Defaulting to hostname 'linux'.
Dec 13 01:30:31.165457 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:30:31.167259 systemd[1]: Reached target network.target - Network.
Dec 13 01:30:31.168706 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:30:31.169933 systemd-networkd[1713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:30:31.219071 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:30:31.225384 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Dec 13 01:30:31.268849 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 13 01:30:31.282576 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:30:31.282692 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Dec 13 01:30:31.283169 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 01:30:31.293115 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:30:31.296151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:30:31.313144 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1745)
Dec 13 01:30:31.510909 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:30:31.641533 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:30:31.654624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:30:31.674260 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:30:31.679860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:30:31.704925 lvm[1850]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:30:31.744565 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:30:31.750623 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:30:31.752547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:30:31.754397 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:30:31.755890 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:30:31.757958 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:30:31.761066 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:30:31.763826 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:30:31.766963 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:30:31.768752 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:30:31.768798 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:30:31.783205 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:30:31.790254 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:30:31.798905 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:30:31.811297 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:30:31.814559 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:30:31.816648 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:30:31.818243 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:30:31.819262 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:30:31.820243 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:30:31.820283 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:30:31.836924 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:30:31.846892 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:30:31.852928 lvm[1857]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:30:31.862071 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:30:31.872235 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:30:31.878274 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:30:31.879620 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:30:31.919633 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:30:31.930557 systemd[1]: Started ntpd.service - Network Time Service.
Dec 13 01:30:31.943291 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 13 01:30:31.949608 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:30:31.960592 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:30:31.971416 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:30:31.974763 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:30:31.975475 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:30:31.980252 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:30:31.985190 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:30:31.999861 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:30:32.021514 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:30:32.021800 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:30:32.045940 jq[1861]: false
Dec 13 01:30:32.046638 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:30:32.046935 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:30:32.073238 jq[1873]: true
Dec 13 01:30:32.119096 dbus-daemon[1860]: [system] SELinux support is enabled
Dec 13 01:30:32.119320 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:30:32.124535 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:30:32.124590 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:30:32.126308 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:30:32.126347 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:30:32.136588 jq[1889]: true
Dec 13 01:30:32.146805 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:30:32.149181 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:30:32.152928 dbus-daemon[1860]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1713 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found loop4
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found loop5
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found loop6
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found loop7
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found nvme0n1
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found nvme0n1p2
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found nvme0n1p3
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found usr
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found nvme0n1p4
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found nvme0n1p6
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found nvme0n1p7
Dec 13 01:30:32.169068 extend-filesystems[1862]: Found nvme0n1p9
Dec 13 01:30:32.169068 extend-filesystems[1862]: Checking size of /dev/nvme0n1p9
Dec 13 01:30:32.193381 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 01:30:32.203670 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 13 01:30:32.209069 update_engine[1872]: I20241213 01:30:32.207909 1872 main.cc:92] Flatcar Update Engine starting
Dec 13 01:30:32.216763 (ntainerd)[1894]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:30:32.219395 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:30:32.221849 update_engine[1872]: I20241213 01:30:32.220710 1872 update_check_scheduler.cc:74] Next update check in 8m49s
Dec 13 01:30:32.223062 ntpd[1864]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting
Dec 13 01:30:32.223990 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting
Dec 13 01:30:32.223990 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:30:32.223990 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: ----------------------------------------------------
Dec 13 01:30:32.223990 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:30:32.223990 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:30:32.223990 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: corporation. Support and training for ntp-4 are
Dec 13 01:30:32.223990 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: available at https://www.nwtime.org/support
Dec 13 01:30:32.223990 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: ----------------------------------------------------
Dec 13 01:30:32.223092 ntpd[1864]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:30:32.223103 ntpd[1864]: ----------------------------------------------------
Dec 13 01:30:32.223112 ntpd[1864]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:30:32.223122 ntpd[1864]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:30:32.223131 ntpd[1864]: corporation. Support and training for ntp-4 are
Dec 13 01:30:32.223140 ntpd[1864]: available at https://www.nwtime.org/support
Dec 13 01:30:32.223149 ntpd[1864]: ----------------------------------------------------
Dec 13 01:30:32.232689 ntpd[1864]: proto: precision = 0.060 usec (-24)
Dec 13 01:30:32.233283 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: proto: precision = 0.060 usec (-24)
Dec 13 01:30:32.233028 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:30:32.236367 ntpd[1864]: basedate set to 2024-11-30
Dec 13 01:30:32.240245 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: basedate set to 2024-11-30
Dec 13 01:30:32.240245 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:30:32.236396 ntpd[1864]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:30:32.252078 ntpd[1864]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:30:32.254867 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:30:32.254867 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:30:32.254867 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:30:32.254867 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: Listen normally on 3 eth0 172.31.22.26:123
Dec 13 01:30:32.254867 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: Listen normally on 4 lo [::1]:123
Dec 13 01:30:32.254867 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: bind(21) AF_INET6 fe80::402:21ff:fee2:310b%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:30:32.254867 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: unable to create socket on eth0 (5) for fe80::402:21ff:fee2:310b%2#123
Dec 13 01:30:32.254867 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: failed to init interface for address fe80::402:21ff:fee2:310b%2
Dec 13 01:30:32.254867 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:30:32.253147 ntpd[1864]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:30:32.253359 ntpd[1864]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:30:32.253400 ntpd[1864]: Listen normally on 3 eth0 172.31.22.26:123
Dec 13 01:30:32.253445 ntpd[1864]: Listen normally on 4 lo [::1]:123
Dec 13 01:30:32.253493 ntpd[1864]: bind(21) AF_INET6 fe80::402:21ff:fee2:310b%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:30:32.253517 ntpd[1864]: unable to create socket on eth0 (5) for fe80::402:21ff:fee2:310b%2#123
Dec 13 01:30:32.253534 ntpd[1864]: failed to init interface for address fe80::402:21ff:fee2:310b%2
Dec 13 01:30:32.253569 ntpd[1864]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:30:32.262689 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:30:32.265587 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:30:32.265587 ntpd[1864]: 13 Dec 01:30:32 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:30:32.262733 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:30:32.276009 systemd-logind[1871]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 01:30:32.277390 extend-filesystems[1862]: Resized partition /dev/nvme0n1p9
Dec 13 01:30:32.277559 systemd-logind[1871]: Watching system buttons on /dev/input/event3 (Sleep Button)
Dec 13 01:30:32.277585 systemd-logind[1871]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:30:32.287468 systemd-logind[1871]: New seat seat0.
Dec 13 01:30:32.291802 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:30:32.303798 extend-filesystems[1918]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:30:32.327347 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 01:30:32.354189 coreos-metadata[1859]: Dec 13 01:30:32.354 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:30:32.357375 coreos-metadata[1859]: Dec 13 01:30:32.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Dec 13 01:30:32.358090 coreos-metadata[1859]: Dec 13 01:30:32.357 INFO Fetch successful
Dec 13 01:30:32.358090 coreos-metadata[1859]: Dec 13 01:30:32.357 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Dec 13 01:30:32.359713 coreos-metadata[1859]: Dec 13 01:30:32.359 INFO Fetch successful
Dec 13 01:30:32.359713 coreos-metadata[1859]: Dec 13 01:30:32.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Dec 13 01:30:32.360702 coreos-metadata[1859]: Dec 13 01:30:32.360 INFO Fetch successful
Dec 13 01:30:32.360702 coreos-metadata[1859]: Dec 13 01:30:32.360 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Dec 13 01:30:32.361530 coreos-metadata[1859]: Dec 13 01:30:32.361 INFO Fetch successful
Dec 13 01:30:32.363365 coreos-metadata[1859]: Dec 13 01:30:32.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Dec 13 01:30:32.367485 coreos-metadata[1859]: Dec 13 01:30:32.363 INFO Fetch failed with 404: resource not found
Dec 13 01:30:32.367485 coreos-metadata[1859]: Dec 13 01:30:32.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Dec 13 01:30:32.367485 coreos-metadata[1859]: Dec 13 01:30:32.365 INFO Fetch successful
Dec 13 01:30:32.367485 coreos-metadata[1859]: Dec 13 01:30:32.366 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Dec 13 01:30:32.367485 coreos-metadata[1859]: Dec 13 01:30:32.366 INFO Fetch successful
Dec 13 01:30:32.367485 coreos-metadata[1859]: Dec 13 01:30:32.366 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Dec 13 01:30:32.372478 coreos-metadata[1859]: Dec 13 01:30:32.368 INFO Fetch successful
Dec 13 01:30:32.372478 coreos-metadata[1859]: Dec 13 01:30:32.368 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Dec 13 01:30:32.372478 coreos-metadata[1859]: Dec 13 01:30:32.369 INFO Fetch successful
Dec 13 01:30:32.372478 coreos-metadata[1859]: Dec 13 01:30:32.369 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Dec 13 01:30:32.375227 coreos-metadata[1859]: Dec 13 01:30:32.375 INFO Fetch successful
Dec 13 01:30:32.436439 systemd-networkd[1713]: eth0: Gained IPv6LL
Dec 13 01:30:32.460539 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:30:32.473448 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 01:30:32.466397 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:30:32.477446 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Dec 13 01:30:32.487715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:32.510991 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:30:32.542100 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1737)
Dec 13 01:30:32.542310 extend-filesystems[1918]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 01:30:32.542310 extend-filesystems[1918]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:30:32.542310 extend-filesystems[1918]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 01:30:32.531761 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:30:32.549731 extend-filesystems[1862]: Resized filesystem in /dev/nvme0n1p9
Dec 13 01:30:32.549731 extend-filesystems[1862]: Found nvme0n1p1
Dec 13 01:30:32.533994 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:30:32.552358 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:30:32.557640 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:30:32.579255 bash[1926]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:30:32.570162 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:30:32.585391 systemd[1]: Starting sshkeys.service...
Dec 13 01:30:32.707977 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 01:30:32.720808 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 01:30:32.733218 amazon-ssm-agent[1938]: Initializing new seelog logger
Dec 13 01:30:32.734454 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:30:32.768729 amazon-ssm-agent[1938]: New Seelog Logger Creation Complete
Dec 13 01:30:32.768729 amazon-ssm-agent[1938]: 2024/12/13 01:30:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:30:32.768729 amazon-ssm-agent[1938]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:30:32.768729 amazon-ssm-agent[1938]: 2024/12/13 01:30:32 processing appconfig overrides
Dec 13 01:30:32.775057 amazon-ssm-agent[1938]: 2024/12/13 01:30:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:30:32.775057 amazon-ssm-agent[1938]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:30:32.775057 amazon-ssm-agent[1938]: 2024/12/13 01:30:32 processing appconfig overrides
Dec 13 01:30:32.775057 amazon-ssm-agent[1938]: 2024-12-13 01:30:32 INFO Proxy environment variables:
Dec 13 01:30:32.778213 amazon-ssm-agent[1938]: 2024/12/13 01:30:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:30:32.778213 amazon-ssm-agent[1938]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:30:32.778213 amazon-ssm-agent[1938]: 2024/12/13 01:30:32 processing appconfig overrides
Dec 13 01:30:32.790523 amazon-ssm-agent[1938]: 2024/12/13 01:30:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:30:32.790646 amazon-ssm-agent[1938]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:30:32.790861 amazon-ssm-agent[1938]: 2024/12/13 01:30:32 processing appconfig overrides
Dec 13 01:30:32.804637 dbus-daemon[1860]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 01:30:32.804858 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 01:30:32.838279 dbus-daemon[1860]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1902 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 01:30:32.866269 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 01:30:32.882503 amazon-ssm-agent[1938]: 2024-12-13 01:30:32 INFO https_proxy:
Dec 13 01:30:32.962972 polkitd[2012]: Started polkitd version 121
Dec 13 01:30:32.980507 amazon-ssm-agent[1938]: 2024-12-13 01:30:32 INFO http_proxy:
Dec 13 01:30:33.038829 polkitd[2012]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 01:30:33.039131 polkitd[2012]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 01:30:33.058299 polkitd[2012]: Finished loading, compiling and executing 2 rules
Dec 13 01:30:33.061821 dbus-daemon[1860]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 01:30:33.064941 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 01:30:33.072115 polkitd[2012]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 01:30:33.079119 amazon-ssm-agent[1938]: 2024-12-13 01:30:32 INFO no_proxy:
Dec 13 01:30:33.187738 amazon-ssm-agent[1938]: 2024-12-13 01:30:32 INFO Checking if agent identity type OnPrem can be assumed
Dec 13 01:30:33.232292 systemd-hostnamed[1902]: Hostname set to (transient)
Dec 13 01:30:33.237669 systemd-resolved[1685]: System hostname changed to 'ip-172-31-22-26'.
Dec 13 01:30:33.311265 amazon-ssm-agent[1938]: 2024-12-13 01:30:32 INFO Checking if agent identity type EC2 can be assumed
Dec 13 01:30:33.321867 locksmithd[1907]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:30:33.377358 coreos-metadata[1990]: Dec 13 01:30:33.376 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:30:33.387071 coreos-metadata[1990]: Dec 13 01:30:33.382 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Dec 13 01:30:33.387071 coreos-metadata[1990]: Dec 13 01:30:33.386 INFO Fetch successful
Dec 13 01:30:33.387071 coreos-metadata[1990]: Dec 13 01:30:33.386 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 01:30:33.389671 coreos-metadata[1990]: Dec 13 01:30:33.389 INFO Fetch successful
Dec 13 01:30:33.394359 unknown[1990]: wrote ssh authorized keys file for user: core
Dec 13 01:30:33.410741 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO Agent will take identity from EC2
Dec 13 01:30:33.460881 update-ssh-keys[2071]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:30:33.463778 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 01:30:33.474175 systemd[1]: Finished sshkeys.service.
Dec 13 01:30:33.507649 containerd[1894]: time="2024-12-13T01:30:33.507537954Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:30:33.509333 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:30:33.608984 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:30:33.645284 containerd[1894]: time="2024-12-13T01:30:33.645124507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:30:33.651212 containerd[1894]: time="2024-12-13T01:30:33.650728557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:30:33.651212 containerd[1894]: time="2024-12-13T01:30:33.650785660Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:30:33.651212 containerd[1894]: time="2024-12-13T01:30:33.650812293Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:30:33.651212 containerd[1894]: time="2024-12-13T01:30:33.650977721Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:30:33.651212 containerd[1894]: time="2024-12-13T01:30:33.650999853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:30:33.651212 containerd[1894]: time="2024-12-13T01:30:33.651111801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:30:33.651212 containerd[1894]: time="2024-12-13T01:30:33.651132297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:30:33.652263 containerd[1894]: time="2024-12-13T01:30:33.652230816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:30:33.652825 containerd[1894]: time="2024-12-13T01:30:33.652801742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:30:33.653336 containerd[1894]: time="2024-12-13T01:30:33.653290521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:30:33.653440 containerd[1894]: time="2024-12-13T01:30:33.653424312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:30:33.653641 containerd[1894]: time="2024-12-13T01:30:33.653612536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:30:33.655078 containerd[1894]: time="2024-12-13T01:30:33.654310742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:30:33.655497 containerd[1894]: time="2024-12-13T01:30:33.655471778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:30:33.656215 sshd_keygen[1896]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:30:33.656565 containerd[1894]: time="2024-12-13T01:30:33.656543353Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:30:33.656734 containerd[1894]: time="2024-12-13T01:30:33.656717275Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:30:33.657555 containerd[1894]: time="2024-12-13T01:30:33.657532949Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.670114302Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.670212516Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.670236832Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.670312377Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.670336981Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.670526356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.671552455Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.671743145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.671775094Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.671798104Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..."
type=io.containerd.sandbox.controller.v1 Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.671827824Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.671856280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.671882935Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:30:33.672123 containerd[1894]: time="2024-12-13T01:30:33.671911423Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.671940132Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.671967822Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672017992Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672060917Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672094624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672121877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672143350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672169213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672192392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672215685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672233911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672258935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672284524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.672705 containerd[1894]: time="2024-12-13T01:30:33.672314493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672423081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672450840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672476933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672510084Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672552552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672578460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672597883Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672678324Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672712468Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672854270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672882692Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672900110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672926876Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Dec 13 01:30:33.673244 containerd[1894]: time="2024-12-13T01:30:33.672950175Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:30:33.675337 containerd[1894]: time="2024-12-13T01:30:33.672967574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:30:33.688473 containerd[1894]: time="2024-12-13T01:30:33.686352448Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:30:33.688473 containerd[1894]: time="2024-12-13T01:30:33.686517045Z" level=info msg="Connect containerd service" Dec 13 01:30:33.688473 containerd[1894]: time="2024-12-13T01:30:33.686600260Z" level=info msg="using legacy CRI server" Dec 13 01:30:33.688473 containerd[1894]: time="2024-12-13T01:30:33.686612624Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:30:33.701260 containerd[1894]: time="2024-12-13T01:30:33.701202517Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:30:33.706391 containerd[1894]: time="2024-12-13T01:30:33.706283322Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:30:33.706627 containerd[1894]: time="2024-12-13T01:30:33.706581924Z" level=info msg="Start subscribing containerd event" Dec 13 
01:30:33.706680 containerd[1894]: time="2024-12-13T01:30:33.706658997Z" level=info msg="Start recovering state" Dec 13 01:30:33.706758 containerd[1894]: time="2024-12-13T01:30:33.706742478Z" level=info msg="Start event monitor" Dec 13 01:30:33.709216 containerd[1894]: time="2024-12-13T01:30:33.706774559Z" level=info msg="Start snapshots syncer" Dec 13 01:30:33.709216 containerd[1894]: time="2024-12-13T01:30:33.706789026Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:30:33.709216 containerd[1894]: time="2024-12-13T01:30:33.706804214Z" level=info msg="Start streaming server" Dec 13 01:30:33.709216 containerd[1894]: time="2024-12-13T01:30:33.707280323Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:30:33.709216 containerd[1894]: time="2024-12-13T01:30:33.707352046Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:30:33.707581 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:30:33.709800 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:30:33.714133 containerd[1894]: time="2024-12-13T01:30:33.714088180Z" level=info msg="containerd successfully booted in 0.209258s" Dec 13 01:30:33.779627 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:30:33.790455 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:30:33.807115 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:30:33.807349 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:30:33.809145 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:30:33.818457 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:30:33.846090 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:30:33.859028 systemd[1]: Started getty@tty1.service - Getty on tty1. 
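The containerd entries above use logfmt-style `key=value` fields (`time=…`, `level=…`, `msg="…"`). A minimal sketch for pulling those fields out of a captured line, assuming only the Python standard library; `parse_logfmt` is a name chosen here, not part of any containerd tooling:

```python
import re

def parse_logfmt(line: str) -> dict:
    """Extract key=value and key="quoted value" pairs from a logfmt-style line."""
    fields = {}
    # A key is a run of word characters; a value is either a double-quoted
    # string (with backslash escapes) or a bare run of non-whitespace.
    for key, value in re.findall(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)', line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')  # strip quotes, unescape
        fields[key] = value
    return fields

sample = ('time="2024-12-13T01:30:33.714088180Z" level=info '
          'msg="containerd successfully booted in 0.209258s"')
fields = parse_logfmt(sample)
```

This treats bare values such as `msg=serving...` and `address=/run/containerd/containerd.sock` as single tokens, which matches how containerd emits them.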
Dec 13 01:30:33.870720 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:30:33.872374 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:30:33.910050 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Dec 13 01:30:34.009607 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [amazon-ssm-agent] Starting Core Agent
Dec 13 01:30:34.026093 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Dec 13 01:30:34.026093 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [Registrar] Starting registrar module
Dec 13 01:30:34.026093 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Dec 13 01:30:34.026093 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [EC2Identity] EC2 registration was successful.
Dec 13 01:30:34.026093 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [CredentialRefresher] credentialRefresher has started
Dec 13 01:30:34.026093 amazon-ssm-agent[1938]: 2024-12-13 01:30:33 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 13 01:30:34.026093 amazon-ssm-agent[1938]: 2024-12-13 01:30:34 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 13 01:30:34.109603 amazon-ssm-agent[1938]: 2024-12-13 01:30:34 INFO [CredentialRefresher] Next credential rotation will be in 32.25832643571667 minutes
Dec 13 01:30:34.575737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:34.577889 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:30:34.741847 systemd[1]: Startup finished in 914ms (kernel) + 8.965s (initrd) + 8.071s (userspace) = 17.951s.
Dec 13 01:30:34.749600 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:35.047958 amazon-ssm-agent[1938]: 2024-12-13 01:30:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 13 01:30:35.148387 amazon-ssm-agent[1938]: 2024-12-13 01:30:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2110) started
Dec 13 01:30:35.223570 ntpd[1864]: Listen normally on 6 eth0 [fe80::402:21ff:fee2:310b%2]:123
Dec 13 01:30:35.223909 ntpd[1864]: 13 Dec 01:30:35 ntpd[1864]: Listen normally on 6 eth0 [fe80::402:21ff:fee2:310b%2]:123
Dec 13 01:30:35.249803 amazon-ssm-agent[1938]: 2024-12-13 01:30:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 13 01:30:35.449144 kubelet[2100]: E1213 01:30:35.449064 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:35.451642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:35.451839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:39.382964 systemd-resolved[1685]: Clock change detected. Flushing caches.
Dec 13 01:30:41.837327 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:30:41.846708 systemd[1]: Started sshd@0-172.31.22.26:22-139.178.68.195:44836.service - OpenSSH per-connection server daemon (139.178.68.195:44836).
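The kubelet exit above is caused by a missing /var/lib/kubelet/config.yaml, the state expected on a node before `kubeadm init` or `kubeadm join` has written that file. A small sketch for flagging this condition when scanning a captured boot log, assuming only the Python standard library; the pattern and function names are chosen here for illustration:

```python
import re

# Matches the kubelet "failed to load kubelet config file" fatal error and
# captures the path reported by the trailing "open <path>: no such file" clause.
MISSING_CONFIG = re.compile(
    r'failed to load [Kk]ubelet config file'
    r'.*?open (\S+?): no such file or directory'
)

def missing_kubelet_configs(log_lines):
    """Return the config-file paths whose absence made the kubelet exit."""
    return [m.group(1) for line in log_lines if (m := MISSING_CONFIG.search(line))]
```

On the error entry above this returns `/var/lib/kubelet/config.yaml`, which distinguishes a pre-join node from a genuinely broken kubelet configuration.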
Dec 13 01:30:42.063431 sshd[2125]: Accepted publickey for core from 139.178.68.195 port 44836 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:30:42.067239 sshd[2125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:42.086184 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:30:42.094496 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:30:42.100249 systemd-logind[1871]: New session 1 of user core.
Dec 13 01:30:42.141646 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:30:42.150211 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:30:42.171550 (systemd)[2129]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:30:42.333000 systemd[2129]: Queued start job for default target default.target.
Dec 13 01:30:42.344228 systemd[2129]: Created slice app.slice - User Application Slice.
Dec 13 01:30:42.344271 systemd[2129]: Reached target paths.target - Paths.
Dec 13 01:30:42.344292 systemd[2129]: Reached target timers.target - Timers.
Dec 13 01:30:42.347057 systemd[2129]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:30:42.369262 systemd[2129]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:30:42.369352 systemd[2129]: Reached target sockets.target - Sockets.
Dec 13 01:30:42.369373 systemd[2129]: Reached target basic.target - Basic System.
Dec 13 01:30:42.369433 systemd[2129]: Reached target default.target - Main User Target.
Dec 13 01:30:42.369474 systemd[2129]: Startup finished in 187ms.
Dec 13 01:30:42.369855 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:30:42.384162 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:30:42.544411 systemd[1]: Started sshd@1-172.31.22.26:22-139.178.68.195:44842.service - OpenSSH per-connection server daemon (139.178.68.195:44842).
Dec 13 01:30:42.737792 sshd[2140]: Accepted publickey for core from 139.178.68.195 port 44842 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:30:42.739900 sshd[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:42.751756 systemd-logind[1871]: New session 2 of user core.
Dec 13 01:30:42.769408 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:30:42.916644 sshd[2140]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:42.926804 systemd[1]: sshd@1-172.31.22.26:22-139.178.68.195:44842.service: Deactivated successfully.
Dec 13 01:30:42.932407 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:30:42.934988 systemd-logind[1871]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:30:42.955224 systemd-logind[1871]: Removed session 2.
Dec 13 01:30:42.956689 systemd[1]: Started sshd@2-172.31.22.26:22-139.178.68.195:44852.service - OpenSSH per-connection server daemon (139.178.68.195:44852).
Dec 13 01:30:43.136176 sshd[2147]: Accepted publickey for core from 139.178.68.195 port 44852 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:30:43.139130 sshd[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:43.146116 systemd-logind[1871]: New session 3 of user core.
Dec 13 01:30:43.149306 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:30:43.269094 sshd[2147]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:43.272452 systemd[1]: sshd@2-172.31.22.26:22-139.178.68.195:44852.service: Deactivated successfully.
Dec 13 01:30:43.284269 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:30:43.288796 systemd-logind[1871]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:30:43.311314 systemd[1]: Started sshd@3-172.31.22.26:22-139.178.68.195:44868.service - OpenSSH per-connection server daemon (139.178.68.195:44868).
Dec 13 01:30:43.312957 systemd-logind[1871]: Removed session 3.
Dec 13 01:30:43.484058 sshd[2154]: Accepted publickey for core from 139.178.68.195 port 44868 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:30:43.486137 sshd[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:43.492442 systemd-logind[1871]: New session 4 of user core.
Dec 13 01:30:43.500182 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:30:43.622844 sshd[2154]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:43.627646 systemd[1]: sshd@3-172.31.22.26:22-139.178.68.195:44868.service: Deactivated successfully.
Dec 13 01:30:43.630419 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:30:43.631580 systemd-logind[1871]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:30:43.633658 systemd-logind[1871]: Removed session 4.
Dec 13 01:30:43.665194 systemd[1]: Started sshd@4-172.31.22.26:22-139.178.68.195:44880.service - OpenSSH per-connection server daemon (139.178.68.195:44880).
Dec 13 01:30:43.851513 sshd[2161]: Accepted publickey for core from 139.178.68.195 port 44880 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:30:43.853863 sshd[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:43.873657 systemd-logind[1871]: New session 5 of user core.
Dec 13 01:30:43.882226 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:30:44.032529 sudo[2164]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:30:44.032999 sudo[2164]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:30:44.059673 sudo[2164]: pam_unix(sudo:session): session closed for user root
Dec 13 01:30:44.093050 sshd[2161]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:44.101690 systemd[1]: sshd@4-172.31.22.26:22-139.178.68.195:44880.service: Deactivated successfully.
Dec 13 01:30:44.107859 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:30:44.112401 systemd-logind[1871]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:30:44.139378 systemd[1]: Started sshd@5-172.31.22.26:22-139.178.68.195:44884.service - OpenSSH per-connection server daemon (139.178.68.195:44884).
Dec 13 01:30:44.140952 systemd-logind[1871]: Removed session 5.
Dec 13 01:30:44.317778 sshd[2169]: Accepted publickey for core from 139.178.68.195 port 44884 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:30:44.319342 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:44.329072 systemd-logind[1871]: New session 6 of user core.
Dec 13 01:30:44.341609 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:30:44.450755 sudo[2173]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:30:44.451191 sudo[2173]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:30:44.471749 sudo[2173]: pam_unix(sudo:session): session closed for user root
Dec 13 01:30:44.482549 sudo[2172]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:30:44.485514 sudo[2172]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:30:44.517820 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:30:44.545767 auditctl[2176]: No rules
Dec 13 01:30:44.546262 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:30:44.546783 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:30:44.557433 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:30:44.619490 augenrules[2194]: No rules
Dec 13 01:30:44.621210 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:30:44.623661 sudo[2172]: pam_unix(sudo:session): session closed for user root
Dec 13 01:30:44.647520 sshd[2169]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:44.653229 systemd[1]: sshd@5-172.31.22.26:22-139.178.68.195:44884.service: Deactivated successfully.
Dec 13 01:30:44.655545 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:30:44.656919 systemd-logind[1871]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:30:44.658954 systemd-logind[1871]: Removed session 6.
Dec 13 01:30:44.685454 systemd[1]: Started sshd@6-172.31.22.26:22-139.178.68.195:44894.service - OpenSSH per-connection server daemon (139.178.68.195:44894).
Dec 13 01:30:44.849528 sshd[2202]: Accepted publickey for core from 139.178.68.195 port 44894 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:30:44.851059 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:44.868253 systemd-logind[1871]: New session 7 of user core.
Dec 13 01:30:44.875165 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:30:44.970569 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:30:44.970974 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:30:45.791876 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:30:45.799576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:45.997326 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:30:45.997549 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:30:45.998246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:46.014444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:46.073863 systemd[1]: Reloading requested from client PID 2241 ('systemctl') (unit session-7.scope)...
Dec 13 01:30:46.073882 systemd[1]: Reloading...
Dec 13 01:30:46.279227 zram_generator::config[2284]: No configuration found.
Dec 13 01:30:46.498242 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:30:46.665655 systemd[1]: Reloading finished in 591 ms.
Dec 13 01:30:46.745519 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:46.750920 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:30:46.751510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:46.759480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:47.343172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:47.347424 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:30:47.415053 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:30:47.416030 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:30:47.416086 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:30:47.419392 kubelet[2343]: I1213 01:30:47.419334 2343 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:30:47.969159 kubelet[2343]: I1213 01:30:47.969119 2343 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 01:30:47.969159 kubelet[2343]: I1213 01:30:47.969147 2343 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:30:47.969461 kubelet[2343]: I1213 01:30:47.969439 2343 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 01:30:48.013150 kubelet[2343]: I1213 01:30:48.012675 2343 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:30:48.022301 kubelet[2343]: E1213 01:30:48.022250 2343 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 01:30:48.022301 kubelet[2343]: I1213 01:30:48.022298 2343 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 01:30:48.027336 kubelet[2343]: I1213 01:30:48.027218 2343 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:30:48.028674 kubelet[2343]: I1213 01:30:48.028644 2343 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 01:30:48.029000 kubelet[2343]: I1213 01:30:48.028842 2343 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:30:48.029204 kubelet[2343]: I1213 01:30:48.029001 2343 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.22.26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 01:30:48.029344 kubelet[2343]: I1213 01:30:48.029216 2343 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:30:48.029344 kubelet[2343]: I1213 01:30:48.029231 2343 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 01:30:48.029514 kubelet[2343]: I1213 01:30:48.029354 2343 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:30:48.032252 kubelet[2343]: I1213 01:30:48.031694 2343 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 01:30:48.032252 kubelet[2343]: I1213 01:30:48.031725 2343 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:30:48.032252 kubelet[2343]: I1213 01:30:48.031767 2343 kubelet.go:314] "Adding apiserver pod source"
Dec 13 01:30:48.032252 kubelet[2343]: I1213 01:30:48.031785 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:30:48.038328 kubelet[2343]: E1213 01:30:48.038294 2343 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:48.038516 kubelet[2343]: E1213 01:30:48.038503 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:48.039507 kubelet[2343]: I1213 01:30:48.039487 2343 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:30:48.041732 kubelet[2343]: I1213 01:30:48.041709 2343 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:30:48.042364 kubelet[2343]: W1213 01:30:48.042333 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:30:48.043081 kubelet[2343]: I1213 01:30:48.043061 2343 server.go:1269] "Started kubelet"
Dec 13 01:30:48.043807 kubelet[2343]: I1213 01:30:48.043765 2343 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:30:48.045327 kubelet[2343]: I1213 01:30:48.045300 2343 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 01:30:48.050956 kubelet[2343]: I1213 01:30:48.050346 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:30:48.050956 kubelet[2343]: I1213 01:30:48.050714 2343 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:30:48.051384 kubelet[2343]: I1213 01:30:48.051365 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:30:48.053709 kubelet[2343]: I1213 01:30:48.052887 2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 01:30:48.057542 kubelet[2343]: I1213 01:30:48.057509 2343 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 01:30:48.057819 kubelet[2343]: E1213 01:30:48.057793 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:48.058432 kubelet[2343]: I1213 01:30:48.058401 2343 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 01:30:48.059657 kubelet[2343]: I1213 01:30:48.059636 2343 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:30:48.066382 kubelet[2343]: I1213 01:30:48.066351 2343 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:30:48.066519 kubelet[2343]: I1213 01:30:48.066471 2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:30:48.069460 kubelet[2343]: I1213 01:30:48.069428 2343 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:30:48.080642 kubelet[2343]: E1213 01:30:48.080606 2343 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:30:48.095731 kubelet[2343]: I1213 01:30:48.095688 2343 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:30:48.095731 kubelet[2343]: I1213 01:30:48.095708 2343 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:30:48.096026 kubelet[2343]: I1213 01:30:48.095748 2343 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:30:48.099726 kubelet[2343]: I1213 01:30:48.099701 2343 policy_none.go:49] "None policy: Start"
Dec 13 01:30:48.100470 kubelet[2343]: I1213 01:30:48.100447 2343 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:30:48.100558 kubelet[2343]: I1213 01:30:48.100485 2343 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:30:48.118601 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:30:48.128815 kubelet[2343]: W1213 01:30:48.126114 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.22.26" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 01:30:48.128815 kubelet[2343]: W1213 01:30:48.126276 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 01:30:48.128815 kubelet[2343]: E1213 01:30:48.126296 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Dec 13 01:30:48.128815 kubelet[2343]: E1213 01:30:48.126354 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.22.26\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Dec 13 01:30:48.128815 kubelet[2343]: E1213 01:30:48.126823 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.22.26\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 01:30:48.128815 kubelet[2343]: W1213 01:30:48.126890 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 01:30:48.128815 kubelet[2343]: E1213 01:30:48.126910 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Dec 13 01:30:48.130654 kubelet[2343]: E1213 01:30:48.126494 2343 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.22.26.1810986c702588b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.22.26,UID:172.31.22.26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.22.26,},FirstTimestamp:2024-12-13 01:30:48.043038902 +0000 UTC m=+0.688623652,LastTimestamp:2024-12-13 01:30:48.043038902 +0000 UTC m=+0.688623652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.26,}"
Dec 13 01:30:48.137110 kubelet[2343]: E1213 01:30:48.135545 2343 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.22.26.1810986c72627320 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.22.26,UID:172.31.22.26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.22.26,},FirstTimestamp:2024-12-13 01:30:48.080585504 +0000 UTC m=+0.726170262,LastTimestamp:2024-12-13 01:30:48.080585504 +0000 UTC m=+0.726170262,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.26,}"
Dec 13 01:30:48.139504 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:30:48.148389 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:30:48.151438 kubelet[2343]: E1213 01:30:48.151256 2343 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.22.26.1810986c733ddf60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.22.26,UID:172.31.22.26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.22.26 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.22.26,},FirstTimestamp:2024-12-13 01:30:48.0949656 +0000 UTC m=+0.740550344,LastTimestamp:2024-12-13 01:30:48.0949656 +0000 UTC m=+0.740550344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.26,}"
Dec 13 01:30:48.158135 kubelet[2343]: E1213 01:30:48.158097 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:48.158453 kubelet[2343]: I1213 01:30:48.158429 2343 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:30:48.158690 kubelet[2343]: I1213 01:30:48.158624 2343 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 01:30:48.158757 kubelet[2343]: I1213 01:30:48.158696 2343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:30:48.160587 kubelet[2343]: I1213 01:30:48.160461 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:30:48.163352 kubelet[2343]: E1213 01:30:48.163324 2343 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.26\" not found"
Dec 13 01:30:48.169202 kubelet[2343]: E1213 01:30:48.169086 2343 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.22.26.1810986c733e0b40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.22.26,UID:172.31.22.26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.22.26 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.22.26,},FirstTimestamp:2024-12-13 01:30:48.094976832 +0000 UTC m=+0.740561582,LastTimestamp:2024-12-13 01:30:48.094976832 +0000 UTC m=+0.740561582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.26,}"
Dec 13 01:30:48.235996 kubelet[2343]: I1213 01:30:48.235706 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:30:48.239089 kubelet[2343]: I1213 01:30:48.239065 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:30:48.239274 kubelet[2343]: I1213 01:30:48.239263 2343 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:30:48.239397 kubelet[2343]: I1213 01:30:48.239388 2343 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 01:30:48.239574 kubelet[2343]: E1213 01:30:48.239528 2343 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 01:30:48.259495 kubelet[2343]: I1213 01:30:48.259467 2343 kubelet_node_status.go:72] "Attempting to register node" node="172.31.22.26"
Dec 13 01:30:48.271805 kubelet[2343]: I1213 01:30:48.271774 2343 kubelet_node_status.go:75] "Successfully registered node" node="172.31.22.26"
Dec 13 01:30:48.271978 kubelet[2343]: E1213 01:30:48.271958 2343 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.22.26\": node \"172.31.22.26\" not found"
Dec 13 01:30:48.297139 kubelet[2343]: E1213 01:30:48.297093 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:48.397856 kubelet[2343]: E1213 01:30:48.397809 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:48.499205 kubelet[2343]: E1213 01:30:48.499054 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:48.529329 sudo[2205]: pam_unix(sudo:session): session closed for user root
Dec 13 01:30:48.552394 sshd[2202]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:48.559304 systemd[1]: sshd@6-172.31.22.26:22-139.178.68.195:44894.service: Deactivated successfully.
Dec 13 01:30:48.562907 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:30:48.565117 systemd-logind[1871]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:30:48.566650 systemd-logind[1871]: Removed session 7.
Dec 13 01:30:48.599925 kubelet[2343]: E1213 01:30:48.599867 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:48.701036 kubelet[2343]: E1213 01:30:48.700992 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:48.801759 kubelet[2343]: E1213 01:30:48.801632 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:48.902549 kubelet[2343]: E1213 01:30:48.902315 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:48.975003 kubelet[2343]: I1213 01:30:48.974956 2343 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 01:30:48.975193 kubelet[2343]: W1213 01:30:48.975171 2343 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 01:30:49.003224 kubelet[2343]: E1213 01:30:49.003171 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:49.039103 kubelet[2343]: E1213 01:30:49.039050 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:49.103687 kubelet[2343]: E1213 01:30:49.103600 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:49.204830 kubelet[2343]: E1213 01:30:49.204787 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.26\" not found"
Dec 13 01:30:49.306681 kubelet[2343]: I1213 01:30:49.306527 2343 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 01:30:49.307267 containerd[1894]: time="2024-12-13T01:30:49.307223301Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:30:49.307779 kubelet[2343]: I1213 01:30:49.307443 2343 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 01:30:50.040184 kubelet[2343]: I1213 01:30:50.040148 2343 apiserver.go:52] "Watching apiserver"
Dec 13 01:30:50.040743 kubelet[2343]: E1213 01:30:50.040147 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:50.047973 kubelet[2343]: E1213 01:30:50.047292 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7942" podUID="be32ba29-5e2e-4cfe-bef0-c648c28c8dd8"
Dec 13 01:30:50.059644 kubelet[2343]: I1213 01:30:50.059553 2343 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 01:30:50.064174 systemd[1]: Created slice kubepods-besteffort-poda93f3c67_361d_4e5e_a7de_646cc31da633.slice - libcontainer container kubepods-besteffort-poda93f3c67_361d_4e5e_a7de_646cc31da633.slice.
Dec 13 01:30:50.075656 systemd[1]: Created slice kubepods-besteffort-pod5eab126a_0555_46b7_b523_f2c15aaf03c4.slice - libcontainer container kubepods-besteffort-pod5eab126a_0555_46b7_b523_f2c15aaf03c4.slice.
Dec 13 01:30:50.081073 kubelet[2343]: I1213 01:30:50.081030 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5eab126a-0555-46b7-b523-f2c15aaf03c4-cni-net-dir\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.081073 kubelet[2343]: I1213 01:30:50.081069 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/be32ba29-5e2e-4cfe-bef0-c648c28c8dd8-varrun\") pod \"csi-node-driver-z7942\" (UID: \"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8\") " pod="calico-system/csi-node-driver-z7942"
Dec 13 01:30:50.081329 kubelet[2343]: I1213 01:30:50.081091 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be32ba29-5e2e-4cfe-bef0-c648c28c8dd8-kubelet-dir\") pod \"csi-node-driver-z7942\" (UID: \"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8\") " pod="calico-system/csi-node-driver-z7942"
Dec 13 01:30:50.081329 kubelet[2343]: I1213 01:30:50.081115 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a93f3c67-361d-4e5e-a7de-646cc31da633-kube-proxy\") pod \"kube-proxy-5jcrc\" (UID: \"a93f3c67-361d-4e5e-a7de-646cc31da633\") " pod="kube-system/kube-proxy-5jcrc"
Dec 13 01:30:50.081329 kubelet[2343]: I1213 01:30:50.081138 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eab126a-0555-46b7-b523-f2c15aaf03c4-tigera-ca-bundle\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.081329 kubelet[2343]: I1213 01:30:50.081228 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5eab126a-0555-46b7-b523-f2c15aaf03c4-node-certs\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.081329 kubelet[2343]: I1213 01:30:50.081248 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5eab126a-0555-46b7-b523-f2c15aaf03c4-cni-log-dir\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.081535 kubelet[2343]: I1213 01:30:50.081269 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv7wh\" (UniqueName: \"kubernetes.io/projected/be32ba29-5e2e-4cfe-bef0-c648c28c8dd8-kube-api-access-tv7wh\") pod \"csi-node-driver-z7942\" (UID: \"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8\") " pod="calico-system/csi-node-driver-z7942"
Dec 13 01:30:50.081535 kubelet[2343]: I1213 01:30:50.081296 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a93f3c67-361d-4e5e-a7de-646cc31da633-lib-modules\") pod \"kube-proxy-5jcrc\" (UID: \"a93f3c67-361d-4e5e-a7de-646cc31da633\") " pod="kube-system/kube-proxy-5jcrc"
Dec 13 01:30:50.081535 kubelet[2343]: I1213 01:30:50.081322 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5eab126a-0555-46b7-b523-f2c15aaf03c4-var-run-calico\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.081535 kubelet[2343]: I1213 01:30:50.081346 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5eab126a-0555-46b7-b523-f2c15aaf03c4-var-lib-calico\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.081535 kubelet[2343]: I1213 01:30:50.081370 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5eab126a-0555-46b7-b523-f2c15aaf03c4-flexvol-driver-host\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.081848 kubelet[2343]: I1213 01:30:50.081394 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/be32ba29-5e2e-4cfe-bef0-c648c28c8dd8-socket-dir\") pod \"csi-node-driver-z7942\" (UID: \"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8\") " pod="calico-system/csi-node-driver-z7942"
Dec 13 01:30:50.081848 kubelet[2343]: I1213 01:30:50.081417 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/be32ba29-5e2e-4cfe-bef0-c648c28c8dd8-registration-dir\") pod \"csi-node-driver-z7942\" (UID: \"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8\") " pod="calico-system/csi-node-driver-z7942"
Dec 13 01:30:50.081848 kubelet[2343]: I1213 01:30:50.081442 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjhl9\" (UniqueName: \"kubernetes.io/projected/a93f3c67-361d-4e5e-a7de-646cc31da633-kube-api-access-hjhl9\") pod \"kube-proxy-5jcrc\" (UID: \"a93f3c67-361d-4e5e-a7de-646cc31da633\") " pod="kube-system/kube-proxy-5jcrc"
Dec 13 01:30:50.081848 kubelet[2343]: I1213 01:30:50.081464 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eab126a-0555-46b7-b523-f2c15aaf03c4-lib-modules\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.081848 kubelet[2343]: I1213 01:30:50.081487 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eab126a-0555-46b7-b523-f2c15aaf03c4-xtables-lock\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.082599 kubelet[2343]: I1213 01:30:50.081517 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4p9t\" (UniqueName: \"kubernetes.io/projected/5eab126a-0555-46b7-b523-f2c15aaf03c4-kube-api-access-n4p9t\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.082599 kubelet[2343]: I1213 01:30:50.081542 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a93f3c67-361d-4e5e-a7de-646cc31da633-xtables-lock\") pod \"kube-proxy-5jcrc\" (UID: \"a93f3c67-361d-4e5e-a7de-646cc31da633\") " pod="kube-system/kube-proxy-5jcrc"
Dec 13 01:30:50.082599 kubelet[2343]: I1213 01:30:50.081641 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5eab126a-0555-46b7-b523-f2c15aaf03c4-policysync\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.082599 kubelet[2343]: I1213 01:30:50.081671 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5eab126a-0555-46b7-b523-f2c15aaf03c4-cni-bin-dir\") pod \"calico-node-rxh4k\" (UID: \"5eab126a-0555-46b7-b523-f2c15aaf03c4\") " pod="calico-system/calico-node-rxh4k"
Dec 13 01:30:50.186085 kubelet[2343]: E1213 01:30:50.186059 2343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:30:50.186278 kubelet[2343]: W1213 01:30:50.186214 2343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:30:50.186278 kubelet[2343]: E1213 01:30:50.186243 2343 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:30:50.195485 kubelet[2343]: E1213 01:30:50.195307 2343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:30:50.195485 kubelet[2343]: W1213 01:30:50.195336 2343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:30:50.195485 kubelet[2343]: E1213 01:30:50.195364 2343 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:30:50.252842 kubelet[2343]: E1213 01:30:50.252797 2343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:30:50.253215 kubelet[2343]: W1213 01:30:50.252823 2343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:30:50.253215 kubelet[2343]: E1213 01:30:50.253034 2343 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:30:50.265420 kubelet[2343]: E1213 01:30:50.265321 2343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:30:50.265420 kubelet[2343]: W1213 01:30:50.265345 2343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:30:50.265420 kubelet[2343]: E1213 01:30:50.265370 2343 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:30:50.274329 kubelet[2343]: E1213 01:30:50.274294 2343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:30:50.274554 kubelet[2343]: W1213 01:30:50.274326 2343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:30:50.274554 kubelet[2343]: E1213 01:30:50.274432 2343 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:30:50.373810 containerd[1894]: time="2024-12-13T01:30:50.373759890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jcrc,Uid:a93f3c67-361d-4e5e-a7de-646cc31da633,Namespace:kube-system,Attempt:0,}"
Dec 13 01:30:50.419754 containerd[1894]: time="2024-12-13T01:30:50.419458908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rxh4k,Uid:5eab126a-0555-46b7-b523-f2c15aaf03c4,Namespace:calico-system,Attempt:0,}"
Dec 13 01:30:51.041501 kubelet[2343]: E1213 01:30:51.041460 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:51.190596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157121341.mount: Deactivated successfully.
Dec 13 01:30:51.198396 containerd[1894]: time="2024-12-13T01:30:51.198344982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:30:51.200141 containerd[1894]: time="2024-12-13T01:30:51.200095410Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:30:51.201640 containerd[1894]: time="2024-12-13T01:30:51.201446678Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Dec 13 01:30:51.203170 containerd[1894]: time="2024-12-13T01:30:51.203093612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:30:51.205970 containerd[1894]: time="2024-12-13T01:30:51.204633158Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:30:51.209028 containerd[1894]: time="2024-12-13T01:30:51.208855976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:30:51.210026 containerd[1894]: time="2024-12-13T01:30:51.209915807Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 790.364393ms"
Dec 13 01:30:51.213817 containerd[1894]: time="2024-12-13T01:30:51.213772815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 839.934296ms"
Dec 13 01:30:51.770251 containerd[1894]: time="2024-12-13T01:30:51.770128404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:30:51.770251 containerd[1894]: time="2024-12-13T01:30:51.770193896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:30:51.770251 containerd[1894]: time="2024-12-13T01:30:51.770216265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:30:51.770870 containerd[1894]: time="2024-12-13T01:30:51.770320386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:30:51.772168 containerd[1894]: time="2024-12-13T01:30:51.771981748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:30:51.772168 containerd[1894]: time="2024-12-13T01:30:51.772047208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:30:51.772168 containerd[1894]: time="2024-12-13T01:30:51.772064930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:30:51.776723 containerd[1894]: time="2024-12-13T01:30:51.773045870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:30:52.042616 kubelet[2343]: E1213 01:30:52.042498 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:52.165219 systemd[1]: Started cri-containerd-287d0abd6e2141de3ec283c1d2394200e6d919d2bdfc7a0c3ead49d2a9ac2daa.scope - libcontainer container 287d0abd6e2141de3ec283c1d2394200e6d919d2bdfc7a0c3ead49d2a9ac2daa.
Dec 13 01:30:52.167165 systemd[1]: Started cri-containerd-5d6f6dc549089e85fbea6094b1753785d625eb64aa4648844b92b4fb02c2ca04.scope - libcontainer container 5d6f6dc549089e85fbea6094b1753785d625eb64aa4648844b92b4fb02c2ca04.
Dec 13 01:30:52.220209 containerd[1894]: time="2024-12-13T01:30:52.219925829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rxh4k,Uid:5eab126a-0555-46b7-b523-f2c15aaf03c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d6f6dc549089e85fbea6094b1753785d625eb64aa4648844b92b4fb02c2ca04\""
Dec 13 01:30:52.224958 containerd[1894]: time="2024-12-13T01:30:52.223456184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jcrc,Uid:a93f3c67-361d-4e5e-a7de-646cc31da633,Namespace:kube-system,Attempt:0,} returns sandbox id \"287d0abd6e2141de3ec283c1d2394200e6d919d2bdfc7a0c3ead49d2a9ac2daa\""
Dec 13 01:30:52.224958 containerd[1894]: time="2024-12-13T01:30:52.224046396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:30:52.241158 kubelet[2343]: E1213 01:30:52.241106 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7942" podUID="be32ba29-5e2e-4cfe-bef0-c648c28c8dd8"
Dec 13 01:30:53.043438 kubelet[2343]: E1213 01:30:53.043385 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:53.622649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2550056663.mount: Deactivated successfully.
Dec 13 01:30:53.848152 containerd[1894]: time="2024-12-13T01:30:53.848095654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:30:53.849727 containerd[1894]: time="2024-12-13T01:30:53.849665492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Dec 13 01:30:53.851882 containerd[1894]: time="2024-12-13T01:30:53.851599119Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:30:53.854754 containerd[1894]: time="2024-12-13T01:30:53.854717497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:30:53.855439 containerd[1894]: time="2024-12-13T01:30:53.855397049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.631312167s"
Dec 13 01:30:53.855566 containerd[1894]: time="2024-12-13T01:30:53.855542831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 01:30:53.857424 containerd[1894]: time="2024-12-13T01:30:53.857395260Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 01:30:53.858592 containerd[1894]: time="2024-12-13T01:30:53.858560890Z" level=info msg="CreateContainer within sandbox \"5d6f6dc549089e85fbea6094b1753785d625eb64aa4648844b92b4fb02c2ca04\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 01:30:53.878491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843940211.mount: Deactivated successfully.
Dec 13 01:30:53.883769 containerd[1894]: time="2024-12-13T01:30:53.883724203Z" level=info msg="CreateContainer within sandbox \"5d6f6dc549089e85fbea6094b1753785d625eb64aa4648844b92b4fb02c2ca04\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9fa068490c627a61dec3fa9d9998d2ef0d8c69397fd72c8a272758ca08ad8a4c\""
Dec 13 01:30:53.886960 containerd[1894]: time="2024-12-13T01:30:53.884618720Z" level=info msg="StartContainer for \"9fa068490c627a61dec3fa9d9998d2ef0d8c69397fd72c8a272758ca08ad8a4c\""
Dec 13 01:30:53.920158 systemd[1]: Started cri-containerd-9fa068490c627a61dec3fa9d9998d2ef0d8c69397fd72c8a272758ca08ad8a4c.scope - libcontainer container 9fa068490c627a61dec3fa9d9998d2ef0d8c69397fd72c8a272758ca08ad8a4c.
Dec 13 01:30:53.994446 containerd[1894]: time="2024-12-13T01:30:53.994347813Z" level=info msg="StartContainer for \"9fa068490c627a61dec3fa9d9998d2ef0d8c69397fd72c8a272758ca08ad8a4c\" returns successfully"
Dec 13 01:30:54.003702 systemd[1]: cri-containerd-9fa068490c627a61dec3fa9d9998d2ef0d8c69397fd72c8a272758ca08ad8a4c.scope: Deactivated successfully.
Dec 13 01:30:54.044265 kubelet[2343]: E1213 01:30:54.044226 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:54.120705 containerd[1894]: time="2024-12-13T01:30:54.120630944Z" level=info msg="shim disconnected" id=9fa068490c627a61dec3fa9d9998d2ef0d8c69397fd72c8a272758ca08ad8a4c namespace=k8s.io
Dec 13 01:30:54.120971 containerd[1894]: time="2024-12-13T01:30:54.120733723Z" level=warning msg="cleaning up after shim disconnected" id=9fa068490c627a61dec3fa9d9998d2ef0d8c69397fd72c8a272758ca08ad8a4c namespace=k8s.io
Dec 13 01:30:54.120971 containerd[1894]: time="2024-12-13T01:30:54.120746867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:30:54.240604 kubelet[2343]: E1213 01:30:54.240472 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7942" podUID="be32ba29-5e2e-4cfe-bef0-c648c28c8dd8"
Dec 13 01:30:54.595989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fa068490c627a61dec3fa9d9998d2ef0d8c69397fd72c8a272758ca08ad8a4c-rootfs.mount: Deactivated successfully.
Dec 13 01:30:55.048003 kubelet[2343]: E1213 01:30:55.046360 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:55.543257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573624871.mount: Deactivated successfully.
Dec 13 01:30:56.048125 kubelet[2343]: E1213 01:30:56.048087 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:56.241585 kubelet[2343]: E1213 01:30:56.240899 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7942" podUID="be32ba29-5e2e-4cfe-bef0-c648c28c8dd8"
Dec 13 01:30:56.330343 containerd[1894]: time="2024-12-13T01:30:56.330291478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:30:56.332237 containerd[1894]: time="2024-12-13T01:30:56.331895646Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243"
Dec 13 01:30:56.334075 containerd[1894]: time="2024-12-13T01:30:56.334034808Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:30:56.346721 containerd[1894]: time="2024-12-13T01:30:56.346215956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:30:56.347872 containerd[1894]: time="2024-12-13T01:30:56.347769766Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.490290104s"
Dec 13 01:30:56.348024 containerd[1894]: time="2024-12-13T01:30:56.347877385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 01:30:56.349774 containerd[1894]: time="2024-12-13T01:30:56.349745668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 01:30:56.351211 containerd[1894]: time="2024-12-13T01:30:56.351178383Z" level=info msg="CreateContainer within sandbox \"287d0abd6e2141de3ec283c1d2394200e6d919d2bdfc7a0c3ead49d2a9ac2daa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:30:56.388565 containerd[1894]: time="2024-12-13T01:30:56.388514053Z" level=info msg="CreateContainer within sandbox \"287d0abd6e2141de3ec283c1d2394200e6d919d2bdfc7a0c3ead49d2a9ac2daa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"262f91463ae612876f6c3fc53fa6745483ac91e2350ae3f7bb67b28e7af4d82c\""
Dec 13 01:30:56.392018 containerd[1894]: time="2024-12-13T01:30:56.391944798Z" level=info msg="StartContainer for \"262f91463ae612876f6c3fc53fa6745483ac91e2350ae3f7bb67b28e7af4d82c\""
Dec 13 01:30:56.438192 systemd[1]: Started cri-containerd-262f91463ae612876f6c3fc53fa6745483ac91e2350ae3f7bb67b28e7af4d82c.scope - libcontainer container 262f91463ae612876f6c3fc53fa6745483ac91e2350ae3f7bb67b28e7af4d82c.
Dec 13 01:30:56.476871 containerd[1894]: time="2024-12-13T01:30:56.476823386Z" level=info msg="StartContainer for \"262f91463ae612876f6c3fc53fa6745483ac91e2350ae3f7bb67b28e7af4d82c\" returns successfully"
Dec 13 01:30:57.049619 kubelet[2343]: E1213 01:30:57.049561 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:57.297645 kubelet[2343]: I1213 01:30:57.297585 2343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jcrc" podStartSLOduration=5.174581434 podStartE2EDuration="9.297561496s" podCreationTimestamp="2024-12-13 01:30:48 +0000 UTC" firstStartedPulling="2024-12-13 01:30:52.226598719 +0000 UTC m=+4.872183457" lastFinishedPulling="2024-12-13 01:30:56.349578769 +0000 UTC m=+8.995163519" observedRunningTime="2024-12-13 01:30:57.295304988 +0000 UTC m=+9.940889745" watchObservedRunningTime="2024-12-13 01:30:57.297561496 +0000 UTC m=+9.943146249"
Dec 13 01:30:58.056562 kubelet[2343]: E1213 01:30:58.056490 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:30:58.240606 kubelet[2343]: E1213 01:30:58.240210 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7942" podUID="be32ba29-5e2e-4cfe-bef0-c648c28c8dd8"
Dec 13 01:30:59.056713 kubelet[2343]: E1213 01:30:59.056638 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:00.064139 kubelet[2343]: E1213 01:31:00.064087 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:00.241637 kubelet[2343]: E1213 01:31:00.240641 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7942" podUID="be32ba29-5e2e-4cfe-bef0-c648c28c8dd8"
Dec 13 01:31:01.065003 kubelet[2343]: E1213 01:31:01.064648 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:01.175845 containerd[1894]: time="2024-12-13T01:31:01.175791402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:01.177140 containerd[1894]: time="2024-12-13T01:31:01.177080518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 13 01:31:01.179497 containerd[1894]: time="2024-12-13T01:31:01.179315398Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:01.182525 containerd[1894]: time="2024-12-13T01:31:01.182460937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:01.186801 containerd[1894]: time="2024-12-13T01:31:01.183169483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.832966607s"
Dec 13 01:31:01.186801 containerd[1894]: time="2024-12-13T01:31:01.183212203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 01:31:01.190649 containerd[1894]: time="2024-12-13T01:31:01.190611444Z" level=info msg="CreateContainer within sandbox \"5d6f6dc549089e85fbea6094b1753785d625eb64aa4648844b92b4fb02c2ca04\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:31:01.224210 containerd[1894]: time="2024-12-13T01:31:01.224155362Z" level=info msg="CreateContainer within sandbox \"5d6f6dc549089e85fbea6094b1753785d625eb64aa4648844b92b4fb02c2ca04\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0\""
Dec 13 01:31:01.225054 containerd[1894]: time="2024-12-13T01:31:01.225002016Z" level=info msg="StartContainer for \"059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0\""
Dec 13 01:31:01.295511 systemd[1]: run-containerd-runc-k8s.io-059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0-runc.3AQKKB.mount: Deactivated successfully.
Dec 13 01:31:01.328476 systemd[1]: Started cri-containerd-059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0.scope - libcontainer container 059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0.
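The pod_startup_latency_tracker record for kube-proxy-5jcrc above reports both podStartE2EDuration (pod creation to observed running) and podStartSLOduration, which excludes time spent pulling images. The arithmetic can be checked against the logged timestamps; the values below are copied from that record, and the tiny difference from the logged 5.174581434 comes from rounding in the printed sub-second fields (this interpretation of the SLO metric is an assumption, not stated in the log):

```shell
# podStartSLOduration ~= podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
# Second-of-minute values taken from the kubelet record above (all at 01:30).
awk 'BEGIN {
  e2e  = 9.297561496                  # podStartE2EDuration, seconds
  pull = 56.349578769 - 52.226598719  # image-pull interval, seconds
  printf "%.6f\n", e2e - pull         # close to the logged podStartSLOduration
}'
```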
Dec 13 01:31:01.397627 containerd[1894]: time="2024-12-13T01:31:01.397574691Z" level=info msg="StartContainer for \"059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0\" returns successfully"
Dec 13 01:31:02.065559 kubelet[2343]: E1213 01:31:02.065383 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:02.252831 kubelet[2343]: E1213 01:31:02.248681 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7942" podUID="be32ba29-5e2e-4cfe-bef0-c648c28c8dd8"
Dec 13 01:31:03.065688 kubelet[2343]: E1213 01:31:03.065640 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:03.276580 containerd[1894]: time="2024-12-13T01:31:03.276523996Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:31:03.279988 systemd[1]: cri-containerd-059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0.scope: Deactivated successfully.
Dec 13 01:31:03.326581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0-rootfs.mount: Deactivated successfully.
Dec 13 01:31:03.349095 kubelet[2343]: I1213 01:31:03.348943 2343 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 01:31:03.425275 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:31:03.944514 containerd[1894]: time="2024-12-13T01:31:03.944435863Z" level=info msg="shim disconnected" id=059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0 namespace=k8s.io
Dec 13 01:31:03.944514 containerd[1894]: time="2024-12-13T01:31:03.944500574Z" level=warning msg="cleaning up after shim disconnected" id=059ddf5e9735915c784e1b69ab7d6d6d902ca03666c3c95a509848b503b90de0 namespace=k8s.io
Dec 13 01:31:03.944514 containerd[1894]: time="2024-12-13T01:31:03.944514075Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:04.066291 kubelet[2343]: E1213 01:31:04.066231 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:04.261007 systemd[1]: Created slice kubepods-besteffort-podbe32ba29_5e2e_4cfe_bef0_c648c28c8dd8.slice - libcontainer container kubepods-besteffort-podbe32ba29_5e2e_4cfe_bef0_c648c28c8dd8.slice.
Dec 13 01:31:04.267162 containerd[1894]: time="2024-12-13T01:31:04.267109229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7942,Uid:be32ba29-5e2e-4cfe-bef0-c648c28c8dd8,Namespace:calico-system,Attempt:0,}"
Dec 13 01:31:04.377514 containerd[1894]: time="2024-12-13T01:31:04.377455337Z" level=error msg="Failed to destroy network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:31:04.378471 containerd[1894]: time="2024-12-13T01:31:04.377890305Z" level=error msg="encountered an error cleaning up failed sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:31:04.378471 containerd[1894]: time="2024-12-13T01:31:04.377980688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7942,Uid:be32ba29-5e2e-4cfe-bef0-c648c28c8dd8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:31:04.380134 kubelet[2343]: E1213 01:31:04.380093 2343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:31:04.380309 kubelet[2343]: E1213 01:31:04.380178 2343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7942"
Dec 13 01:31:04.380309 kubelet[2343]: E1213 01:31:04.380206 2343 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7942"
Dec 13 01:31:04.380309 kubelet[2343]: E1213 01:31:04.380270 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7942_calico-system(be32ba29-5e2e-4cfe-bef0-c648c28c8dd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7942_calico-system(be32ba29-5e2e-4cfe-bef0-c648c28c8dd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7942" podUID="be32ba29-5e2e-4cfe-bef0-c648c28c8dd8"
Dec 13 01:31:04.380720 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568-shm.mount: Deactivated successfully.
Dec 13 01:31:04.417745 containerd[1894]: time="2024-12-13T01:31:04.417707608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 01:31:04.418668 kubelet[2343]: I1213 01:31:04.418631 2343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568"
Dec 13 01:31:04.419365 containerd[1894]: time="2024-12-13T01:31:04.419321325Z" level=info msg="StopPodSandbox for \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\""
Dec 13 01:31:04.419626 containerd[1894]: time="2024-12-13T01:31:04.419589583Z" level=info msg="Ensure that sandbox d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568 in task-service has been cleanup successfully"
Dec 13 01:31:04.494160 containerd[1894]: time="2024-12-13T01:31:04.494092141Z" level=error msg="StopPodSandbox for \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\" failed" error="failed to destroy network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:31:04.494440 kubelet[2343]: E1213 01:31:04.494330 2343 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568"
Dec 13 01:31:04.494440 kubelet[2343]: E1213 01:31:04.494381 2343 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568"}
Dec 13 01:31:04.494440 kubelet[2343]: E1213 01:31:04.494443 2343 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:31:04.494655 kubelet[2343]: E1213 01:31:04.494466 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7942" podUID="be32ba29-5e2e-4cfe-bef0-c648c28c8dd8"
Dec 13 01:31:04.793143 systemd[1]: Created slice kubepods-besteffort-pod232f4630_aec0_4a84_977d_26cdc5c1e9a5.slice - libcontainer container kubepods-besteffort-pod232f4630_aec0_4a84_977d_26cdc5c1e9a5.slice.
Dec 13 01:31:04.920008 kubelet[2343]: I1213 01:31:04.919958 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgm4\" (UniqueName: \"kubernetes.io/projected/232f4630-aec0-4a84-977d-26cdc5c1e9a5-kube-api-access-qlgm4\") pod \"nginx-deployment-8587fbcb89-wcwct\" (UID: \"232f4630-aec0-4a84-977d-26cdc5c1e9a5\") " pod="default/nginx-deployment-8587fbcb89-wcwct"
Dec 13 01:31:05.067357 kubelet[2343]: E1213 01:31:05.067319 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:05.103494 containerd[1894]: time="2024-12-13T01:31:05.102639210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-wcwct,Uid:232f4630-aec0-4a84-977d-26cdc5c1e9a5,Namespace:default,Attempt:0,}"
Dec 13 01:31:05.206822 containerd[1894]: time="2024-12-13T01:31:05.206758098Z" level=error msg="Failed to destroy network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:31:05.207504 containerd[1894]: time="2024-12-13T01:31:05.207185181Z" level=error msg="encountered an error cleaning up failed sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:31:05.207504 containerd[1894]: time="2024-12-13T01:31:05.207250879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-wcwct,Uid:232f4630-aec0-4a84-977d-26cdc5c1e9a5,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:31:05.209011 kubelet[2343]: E1213 01:31:05.207509 2343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:31:05.209011 kubelet[2343]: E1213 01:31:05.207579 2343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-wcwct"
Dec 13 01:31:05.209011 kubelet[2343]: E1213 01:31:05.207606 2343 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-wcwct"
Dec 13 01:31:05.209232 kubelet[2343]: E1213 01:31:05.207662 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-wcwct_default(232f4630-aec0-4a84-977d-26cdc5c1e9a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-wcwct_default(232f4630-aec0-4a84-977d-26cdc5c1e9a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-wcwct" podUID="232f4630-aec0-4a84-977d-26cdc5c1e9a5"
Dec 13 01:31:05.210008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64-shm.mount: Deactivated successfully.
Dec 13 01:31:05.422808 kubelet[2343]: I1213 01:31:05.422114 2343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:05.423052 containerd[1894]: time="2024-12-13T01:31:05.422983846Z" level=info msg="StopPodSandbox for \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\"" Dec 13 01:31:05.423436 containerd[1894]: time="2024-12-13T01:31:05.423218047Z" level=info msg="Ensure that sandbox 4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64 in task-service has been cleanup successfully" Dec 13 01:31:05.479999 containerd[1894]: time="2024-12-13T01:31:05.479256183Z" level=error msg="StopPodSandbox for \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\" failed" error="failed to destroy network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:31:05.481278 kubelet[2343]: E1213 01:31:05.481108 2343 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:05.481278 kubelet[2343]: E1213 01:31:05.481167 2343 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64"} Dec 13 01:31:05.481278 kubelet[2343]: E1213 01:31:05.481211 2343 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"232f4630-aec0-4a84-977d-26cdc5c1e9a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:31:05.481278 kubelet[2343]: E1213 01:31:05.481243 2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"232f4630-aec0-4a84-977d-26cdc5c1e9a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-wcwct" podUID="232f4630-aec0-4a84-977d-26cdc5c1e9a5" Dec 13 01:31:06.070731 kubelet[2343]: E1213 01:31:06.070018 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:07.071076 kubelet[2343]: E1213 01:31:07.071031 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:08.033273 kubelet[2343]: E1213 01:31:08.032808 2343 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:08.071902 kubelet[2343]: E1213 01:31:08.071864 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:09.072670 kubelet[2343]: E1213 01:31:09.072629 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
01:31:10.073715 kubelet[2343]: E1213 01:31:10.073593 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:11.073999 kubelet[2343]: E1213 01:31:11.073958 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:12.051653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332898513.mount: Deactivated successfully. Dec 13 01:31:12.075534 kubelet[2343]: E1213 01:31:12.075458 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:12.117928 containerd[1894]: time="2024-12-13T01:31:12.117860742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:12.119710 containerd[1894]: time="2024-12-13T01:31:12.119466260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:31:12.122993 containerd[1894]: time="2024-12-13T01:31:12.122011858Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:12.125231 containerd[1894]: time="2024-12-13T01:31:12.125086958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:12.126193 containerd[1894]: time="2024-12-13T01:31:12.125778488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.707850584s" Dec 13 01:31:12.126193 containerd[1894]: time="2024-12-13T01:31:12.125823282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:31:12.160659 containerd[1894]: time="2024-12-13T01:31:12.160527819Z" level=info msg="CreateContainer within sandbox \"5d6f6dc549089e85fbea6094b1753785d625eb64aa4648844b92b4fb02c2ca04\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:31:12.186479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996848118.mount: Deactivated successfully. Dec 13 01:31:12.197639 containerd[1894]: time="2024-12-13T01:31:12.197586798Z" level=info msg="CreateContainer within sandbox \"5d6f6dc549089e85fbea6094b1753785d625eb64aa4648844b92b4fb02c2ca04\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0c272a9a3c52858d5cced9e48d28fcf0c02bd9f7a4952ad7438518855dd63939\"" Dec 13 01:31:12.198258 containerd[1894]: time="2024-12-13T01:31:12.198219856Z" level=info msg="StartContainer for \"0c272a9a3c52858d5cced9e48d28fcf0c02bd9f7a4952ad7438518855dd63939\"" Dec 13 01:31:12.343204 systemd[1]: Started cri-containerd-0c272a9a3c52858d5cced9e48d28fcf0c02bd9f7a4952ad7438518855dd63939.scope - libcontainer container 0c272a9a3c52858d5cced9e48d28fcf0c02bd9f7a4952ad7438518855dd63939. Dec 13 01:31:12.395980 containerd[1894]: time="2024-12-13T01:31:12.394618931Z" level=info msg="StartContainer for \"0c272a9a3c52858d5cced9e48d28fcf0c02bd9f7a4952ad7438518855dd63939\" returns successfully" Dec 13 01:31:12.517961 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:31:12.518114 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 01:31:13.075678 kubelet[2343]: E1213 01:31:13.075608 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:14.076956 kubelet[2343]: E1213 01:31:14.076069 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:14.369965 kernel: bpftool[3128]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:31:14.634047 (udev-worker)[2943]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:31:14.635391 systemd-networkd[1713]: vxlan.calico: Link UP Dec 13 01:31:14.635397 systemd-networkd[1713]: vxlan.calico: Gained carrier Dec 13 01:31:14.675894 (udev-worker)[3151]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:31:15.076353 kubelet[2343]: E1213 01:31:15.076291 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:15.860107 systemd-networkd[1713]: vxlan.calico: Gained IPv6LL Dec 13 01:31:16.077478 kubelet[2343]: E1213 01:31:16.077415 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:17.078102 kubelet[2343]: E1213 01:31:17.078049 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:17.242187 containerd[1894]: time="2024-12-13T01:31:17.241929659Z" level=info msg="StopPodSandbox for \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\"" Dec 13 01:31:17.448438 kubelet[2343]: I1213 01:31:17.448285 2343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rxh4k" podStartSLOduration=9.544875688 podStartE2EDuration="29.44825974s" podCreationTimestamp="2024-12-13 01:30:48 +0000 UTC" firstStartedPulling="2024-12-13 01:30:52.223520542 +0000 UTC m=+4.869105284" 
lastFinishedPulling="2024-12-13 01:31:12.126904598 +0000 UTC m=+24.772489336" observedRunningTime="2024-12-13 01:31:12.484880964 +0000 UTC m=+25.130465721" watchObservedRunningTime="2024-12-13 01:31:17.44825974 +0000 UTC m=+30.093844587" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.452 [INFO][3213] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.455 [INFO][3213] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" iface="eth0" netns="/var/run/netns/cni-a60412c9-c4e1-5ed1-5acd-224ede64662b" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.457 [INFO][3213] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" iface="eth0" netns="/var/run/netns/cni-a60412c9-c4e1-5ed1-5acd-224ede64662b" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.459 [INFO][3213] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" iface="eth0" netns="/var/run/netns/cni-a60412c9-c4e1-5ed1-5acd-224ede64662b" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.459 [INFO][3213] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.459 [INFO][3213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.573 [INFO][3219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" HandleID="k8s-pod-network.d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.573 [INFO][3219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.574 [INFO][3219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.591 [WARNING][3219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" HandleID="k8s-pod-network.d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.591 [INFO][3219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" HandleID="k8s-pod-network.d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.596 [INFO][3219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:31:17.601361 containerd[1894]: 2024-12-13 01:31:17.599 [INFO][3213] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:17.605106 containerd[1894]: time="2024-12-13T01:31:17.601515465Z" level=info msg="TearDown network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\" successfully" Dec 13 01:31:17.605106 containerd[1894]: time="2024-12-13T01:31:17.601547404Z" level=info msg="StopPodSandbox for \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\" returns successfully" Dec 13 01:31:17.605767 containerd[1894]: time="2024-12-13T01:31:17.605581598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7942,Uid:be32ba29-5e2e-4cfe-bef0-c648c28c8dd8,Namespace:calico-system,Attempt:1,}" Dec 13 01:31:17.606612 systemd[1]: run-netns-cni\x2da60412c9\x2dc4e1\x2d5ed1\x2d5acd\x2d224ede64662b.mount: Deactivated successfully. Dec 13 01:31:17.863792 systemd-networkd[1713]: cali7d4b21e966b: Link UP Dec 13 01:31:17.870035 systemd-networkd[1713]: cali7d4b21e966b: Gained carrier Dec 13 01:31:17.870716 (udev-worker)[3165]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.706 [INFO][3226] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.26-k8s-csi--node--driver--z7942-eth0 csi-node-driver- calico-system be32ba29-5e2e-4cfe-bef0-c648c28c8dd8 1032 0 2024-12-13 01:30:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.22.26 csi-node-driver-z7942 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7d4b21e966b [] []}} ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Namespace="calico-system" Pod="csi-node-driver-z7942" WorkloadEndpoint="172.31.22.26-k8s-csi--node--driver--z7942-" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.706 [INFO][3226] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Namespace="calico-system" Pod="csi-node-driver-z7942" WorkloadEndpoint="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.761 [INFO][3236] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" HandleID="k8s-pod-network.490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.784 [INFO][3236] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" HandleID="k8s-pod-network.490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" 
Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004cf9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.22.26", "pod":"csi-node-driver-z7942", "timestamp":"2024-12-13 01:31:17.761956052 +0000 UTC"}, Hostname:"172.31.22.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.785 [INFO][3236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.785 [INFO][3236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.785 [INFO][3236] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.26' Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.788 [INFO][3236] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" host="172.31.22.26" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.799 [INFO][3236] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.22.26" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.805 [INFO][3236] ipam/ipam.go 489: Trying affinity for 192.168.34.0/26 host="172.31.22.26" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.809 [INFO][3236] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.0/26 host="172.31.22.26" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.813 [INFO][3236] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="172.31.22.26" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.813 [INFO][3236] ipam/ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.34.0/26 handle="k8s-pod-network.490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" host="172.31.22.26" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.817 [INFO][3236] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.823 [INFO][3236] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" host="172.31.22.26" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.841 [INFO][3236] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.1/26] block=192.168.34.0/26 handle="k8s-pod-network.490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" host="172.31.22.26" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.841 [INFO][3236] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.1/26] handle="k8s-pod-network.490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" host="172.31.22.26" Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.841 [INFO][3236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:31:17.908889 containerd[1894]: 2024-12-13 01:31:17.842 [INFO][3236] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.1/26] IPv6=[] ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" HandleID="k8s-pod-network.490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.911141 containerd[1894]: 2024-12-13 01:31:17.850 [INFO][3226] cni-plugin/k8s.go 386: Populated endpoint ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Namespace="calico-system" Pod="csi-node-driver-z7942" WorkloadEndpoint="172.31.22.26-k8s-csi--node--driver--z7942-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-csi--node--driver--z7942-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"", Pod:"csi-node-driver-z7942", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d4b21e966b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:31:17.911141 containerd[1894]: 2024-12-13 01:31:17.854 [INFO][3226] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.1/32] ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Namespace="calico-system" Pod="csi-node-driver-z7942" WorkloadEndpoint="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.911141 containerd[1894]: 2024-12-13 01:31:17.855 [INFO][3226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d4b21e966b ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Namespace="calico-system" Pod="csi-node-driver-z7942" WorkloadEndpoint="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.911141 containerd[1894]: 2024-12-13 01:31:17.869 [INFO][3226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Namespace="calico-system" Pod="csi-node-driver-z7942" WorkloadEndpoint="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.911141 containerd[1894]: 2024-12-13 01:31:17.872 [INFO][3226] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Namespace="calico-system" Pod="csi-node-driver-z7942" WorkloadEndpoint="172.31.22.26-k8s-csi--node--driver--z7942-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-csi--node--driver--z7942-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2024, time.December, 
13, 1, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b", Pod:"csi-node-driver-z7942", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d4b21e966b", MAC:"fa:16:e7:af:28:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:31:17.911141 containerd[1894]: 2024-12-13 01:31:17.902 [INFO][3226] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b" Namespace="calico-system" Pod="csi-node-driver-z7942" WorkloadEndpoint="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:17.939024 containerd[1894]: time="2024-12-13T01:31:17.938713155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:17.939024 containerd[1894]: time="2024-12-13T01:31:17.938807810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:17.939024 containerd[1894]: time="2024-12-13T01:31:17.938833536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:17.939024 containerd[1894]: time="2024-12-13T01:31:17.938973770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:17.956092 update_engine[1872]: I20241213 01:31:17.956017 1872 update_attempter.cc:509] Updating boot flags... Dec 13 01:31:17.992131 systemd[1]: Started cri-containerd-490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b.scope - libcontainer container 490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b. Dec 13 01:31:18.038961 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3151) Dec 13 01:31:18.061323 containerd[1894]: time="2024-12-13T01:31:18.061277561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7942,Uid:be32ba29-5e2e-4cfe-bef0-c648c28c8dd8,Namespace:calico-system,Attempt:1,} returns sandbox id \"490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b\"" Dec 13 01:31:18.064700 containerd[1894]: time="2024-12-13T01:31:18.064582715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:31:18.079304 kubelet[2343]: E1213 01:31:18.079187 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:18.252146 containerd[1894]: time="2024-12-13T01:31:18.250418336Z" level=info msg="StopPodSandbox for \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\"" Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.328 [INFO][3401] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 
01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.329 [INFO][3401] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" iface="eth0" netns="/var/run/netns/cni-170557a1-c96b-051d-cd65-5e171d641982" Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.329 [INFO][3401] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" iface="eth0" netns="/var/run/netns/cni-170557a1-c96b-051d-cd65-5e171d641982" Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.329 [INFO][3401] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" iface="eth0" netns="/var/run/netns/cni-170557a1-c96b-051d-cd65-5e171d641982" Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.329 [INFO][3401] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.329 [INFO][3401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.375 [INFO][3407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" HandleID="k8s-pod-network.4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.375 [INFO][3407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.375 [INFO][3407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.393 [WARNING][3407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" HandleID="k8s-pod-network.4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.393 [INFO][3407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" HandleID="k8s-pod-network.4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.397 [INFO][3407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:31:18.401472 containerd[1894]: 2024-12-13 01:31:18.400 [INFO][3401] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:18.403195 containerd[1894]: time="2024-12-13T01:31:18.403030654Z" level=info msg="TearDown network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\" successfully" Dec 13 01:31:18.403195 containerd[1894]: time="2024-12-13T01:31:18.403072155Z" level=info msg="StopPodSandbox for \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\" returns successfully" Dec 13 01:31:18.404586 containerd[1894]: time="2024-12-13T01:31:18.404552390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-wcwct,Uid:232f4630-aec0-4a84-977d-26cdc5c1e9a5,Namespace:default,Attempt:1,}" Dec 13 01:31:18.608714 systemd[1]: run-netns-cni\x2d170557a1\x2dc96b\x2d051d\x2dcd65\x2d5e171d641982.mount: Deactivated successfully. 
Dec 13 01:31:18.679466 systemd-networkd[1713]: cali9f6506c4951: Link UP
Dec 13 01:31:18.684758 systemd-networkd[1713]: cali9f6506c4951: Gained carrier
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.500 [INFO][3413] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0 nginx-deployment-8587fbcb89- default 232f4630-aec0-4a84-977d-26cdc5c1e9a5 1041 0 2024-12-13 01:31:04 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.22.26 nginx-deployment-8587fbcb89-wcwct eth0 default [] [] [kns.default ksa.default.default] cali9f6506c4951 [] []}} ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Namespace="default" Pod="nginx-deployment-8587fbcb89-wcwct" WorkloadEndpoint="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.500 [INFO][3413] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Namespace="default" Pod="nginx-deployment-8587fbcb89-wcwct" WorkloadEndpoint="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.549 [INFO][3425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" HandleID="k8s-pod-network.1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.579 [INFO][3425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" HandleID="k8s-pod-network.1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291a40), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.26", "pod":"nginx-deployment-8587fbcb89-wcwct", "timestamp":"2024-12-13 01:31:18.549402049 +0000 UTC"}, Hostname:"172.31.22.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.584 [INFO][3425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.584 [INFO][3425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.585 [INFO][3425] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.26'
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.591 [INFO][3425] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" host="172.31.22.26"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.599 [INFO][3425] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.22.26"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.611 [INFO][3425] ipam/ipam.go 489: Trying affinity for 192.168.34.0/26 host="172.31.22.26"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.614 [INFO][3425] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.0/26 host="172.31.22.26"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.635 [INFO][3425] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="172.31.22.26"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.635 [INFO][3425] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" host="172.31.22.26"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.640 [INFO][3425] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.651 [INFO][3425] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" host="172.31.22.26"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.668 [INFO][3425] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.2/26] block=192.168.34.0/26 handle="k8s-pod-network.1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" host="172.31.22.26"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.668 [INFO][3425] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.2/26] handle="k8s-pod-network.1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" host="172.31.22.26"
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.668 [INFO][3425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:31:18.715642 containerd[1894]: 2024-12-13 01:31:18.668 [INFO][3425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.2/26] IPv6=[] ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" HandleID="k8s-pod-network.1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0"
Dec 13 01:31:18.717598 containerd[1894]: 2024-12-13 01:31:18.671 [INFO][3413] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Namespace="default" Pod="nginx-deployment-8587fbcb89-wcwct" WorkloadEndpoint="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"232f4630-aec0-4a84-977d-26cdc5c1e9a5", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-wcwct", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9f6506c4951", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:31:18.717598 containerd[1894]: 2024-12-13 01:31:18.671 [INFO][3413] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.2/32] ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Namespace="default" Pod="nginx-deployment-8587fbcb89-wcwct" WorkloadEndpoint="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0"
Dec 13 01:31:18.717598 containerd[1894]: 2024-12-13 01:31:18.671 [INFO][3413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f6506c4951 ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Namespace="default" Pod="nginx-deployment-8587fbcb89-wcwct" WorkloadEndpoint="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0"
Dec 13 01:31:18.717598 containerd[1894]: 2024-12-13 01:31:18.682 [INFO][3413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Namespace="default" Pod="nginx-deployment-8587fbcb89-wcwct" WorkloadEndpoint="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0"
Dec 13 01:31:18.717598 containerd[1894]: 2024-12-13 01:31:18.688 [INFO][3413] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Namespace="default" Pod="nginx-deployment-8587fbcb89-wcwct" WorkloadEndpoint="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"232f4630-aec0-4a84-977d-26cdc5c1e9a5", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417", Pod:"nginx-deployment-8587fbcb89-wcwct", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9f6506c4951", MAC:"86:ee:69:8e:a7:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:31:18.717598 containerd[1894]: 2024-12-13 01:31:18.712 [INFO][3413] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417" Namespace="default" Pod="nginx-deployment-8587fbcb89-wcwct" WorkloadEndpoint="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0"
Dec 13 01:31:18.752764 containerd[1894]: time="2024-12-13T01:31:18.752630026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:31:18.752764 containerd[1894]: time="2024-12-13T01:31:18.752698829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:31:18.752764 containerd[1894]: time="2024-12-13T01:31:18.752715295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:18.753198 containerd[1894]: time="2024-12-13T01:31:18.752829228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:18.787850 systemd[1]: run-containerd-runc-k8s.io-1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417-runc.CTlbZy.mount: Deactivated successfully.
Dec 13 01:31:18.803351 systemd[1]: Started cri-containerd-1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417.scope - libcontainer container 1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417.
Dec 13 01:31:18.882690 containerd[1894]: time="2024-12-13T01:31:18.882524887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-wcwct,Uid:232f4630-aec0-4a84-977d-26cdc5c1e9a5,Namespace:default,Attempt:1,} returns sandbox id \"1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417\""
Dec 13 01:31:19.079830 kubelet[2343]: E1213 01:31:19.079776 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:19.447663 systemd-networkd[1713]: cali7d4b21e966b: Gained IPv6LL
Dec 13 01:31:19.567600 containerd[1894]: time="2024-12-13T01:31:19.567549872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:19.568849 containerd[1894]: time="2024-12-13T01:31:19.568739536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 01:31:19.571180 containerd[1894]: time="2024-12-13T01:31:19.570261031Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:19.573169 containerd[1894]: time="2024-12-13T01:31:19.573139923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:19.574059 containerd[1894]: time="2024-12-13T01:31:19.574025271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.509400405s"
Dec 13 01:31:19.574139 containerd[1894]: time="2024-12-13T01:31:19.574066436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 01:31:19.576556 containerd[1894]: time="2024-12-13T01:31:19.576528787Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 01:31:19.584464 containerd[1894]: time="2024-12-13T01:31:19.584422352Z" level=info msg="CreateContainer within sandbox \"490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 01:31:19.629230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154223191.mount: Deactivated successfully.
Dec 13 01:31:19.635270 containerd[1894]: time="2024-12-13T01:31:19.635235721Z" level=info msg="CreateContainer within sandbox \"490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f91a34c683fc805e13c057a9f65e55376e5f8a48975d4153af1f8fa93bd7efce\""
Dec 13 01:31:19.636440 containerd[1894]: time="2024-12-13T01:31:19.636358839Z" level=info msg="StartContainer for \"f91a34c683fc805e13c057a9f65e55376e5f8a48975d4153af1f8fa93bd7efce\""
Dec 13 01:31:19.686162 systemd[1]: Started cri-containerd-f91a34c683fc805e13c057a9f65e55376e5f8a48975d4153af1f8fa93bd7efce.scope - libcontainer container f91a34c683fc805e13c057a9f65e55376e5f8a48975d4153af1f8fa93bd7efce.
Dec 13 01:31:19.729493 containerd[1894]: time="2024-12-13T01:31:19.728063031Z" level=info msg="StartContainer for \"f91a34c683fc805e13c057a9f65e55376e5f8a48975d4153af1f8fa93bd7efce\" returns successfully"
Dec 13 01:31:20.080149 kubelet[2343]: E1213 01:31:20.080097 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:20.467330 systemd-networkd[1713]: cali9f6506c4951: Gained IPv6LL
Dec 13 01:31:21.081070 kubelet[2343]: E1213 01:31:21.081029 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:22.082146 kubelet[2343]: E1213 01:31:22.082108 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:23.083059 kubelet[2343]: E1213 01:31:23.083020 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:23.383147 ntpd[1864]: Listen normally on 7 vxlan.calico 192.168.34.0:123
Dec 13 01:31:23.385654 ntpd[1864]: 13 Dec 01:31:23 ntpd[1864]: Listen normally on 7 vxlan.calico 192.168.34.0:123
Dec 13 01:31:23.385654 ntpd[1864]: 13 Dec 01:31:23 ntpd[1864]: Listen normally on 8 vxlan.calico [fe80::64d5:2ff:fe6a:7b4a%3]:123
Dec 13 01:31:23.385654 ntpd[1864]: 13 Dec 01:31:23 ntpd[1864]: Listen normally on 9 cali7d4b21e966b [fe80::ecee:eeff:feee:eeee%6]:123
Dec 13 01:31:23.385654 ntpd[1864]: 13 Dec 01:31:23 ntpd[1864]: Listen normally on 10 cali9f6506c4951 [fe80::ecee:eeff:feee:eeee%7]:123
Dec 13 01:31:23.383229 ntpd[1864]: Listen normally on 8 vxlan.calico [fe80::64d5:2ff:fe6a:7b4a%3]:123
Dec 13 01:31:23.383281 ntpd[1864]: Listen normally on 9 cali7d4b21e966b [fe80::ecee:eeff:feee:eeee%6]:123
Dec 13 01:31:23.383317 ntpd[1864]: Listen normally on 10 cali9f6506c4951 [fe80::ecee:eeff:feee:eeee%7]:123
Dec 13 01:31:23.855005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873136880.mount: Deactivated successfully.
Dec 13 01:31:24.084100 kubelet[2343]: E1213 01:31:24.083859 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:25.084815 kubelet[2343]: E1213 01:31:25.084512 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:25.746237 containerd[1894]: time="2024-12-13T01:31:25.746106049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:25.747776 containerd[1894]: time="2024-12-13T01:31:25.747668813Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027"
Dec 13 01:31:25.749375 containerd[1894]: time="2024-12-13T01:31:25.749319037Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:25.756982 containerd[1894]: time="2024-12-13T01:31:25.754647949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:25.758765 containerd[1894]: time="2024-12-13T01:31:25.758717690Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 6.182146858s"
Dec 13 01:31:25.758930 containerd[1894]: time="2024-12-13T01:31:25.758907715Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 01:31:25.777548 containerd[1894]: time="2024-12-13T01:31:25.777508185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 01:31:25.778777 containerd[1894]: time="2024-12-13T01:31:25.778740936Z" level=info msg="CreateContainer within sandbox \"1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 01:31:25.805000 containerd[1894]: time="2024-12-13T01:31:25.804832843Z" level=info msg="CreateContainer within sandbox \"1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e72acb20c47340e064284339eb54c59c1cd614d033f28c5cf23ed5ce59eb4b61\""
Dec 13 01:31:25.805509 containerd[1894]: time="2024-12-13T01:31:25.805481629Z" level=info msg="StartContainer for \"e72acb20c47340e064284339eb54c59c1cd614d033f28c5cf23ed5ce59eb4b61\""
Dec 13 01:31:25.853473 systemd[1]: run-containerd-runc-k8s.io-e72acb20c47340e064284339eb54c59c1cd614d033f28c5cf23ed5ce59eb4b61-runc.rvDs1H.mount: Deactivated successfully.
Dec 13 01:31:25.863320 systemd[1]: Started cri-containerd-e72acb20c47340e064284339eb54c59c1cd614d033f28c5cf23ed5ce59eb4b61.scope - libcontainer container e72acb20c47340e064284339eb54c59c1cd614d033f28c5cf23ed5ce59eb4b61.
Dec 13 01:31:25.923247 containerd[1894]: time="2024-12-13T01:31:25.923076224Z" level=info msg="StartContainer for \"e72acb20c47340e064284339eb54c59c1cd614d033f28c5cf23ed5ce59eb4b61\" returns successfully"
Dec 13 01:31:26.084761 kubelet[2343]: E1213 01:31:26.084694 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:27.085084 kubelet[2343]: E1213 01:31:27.084961 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:27.294422 containerd[1894]: time="2024-12-13T01:31:27.294367839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:27.296350 containerd[1894]: time="2024-12-13T01:31:27.296104598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 01:31:27.298028 containerd[1894]: time="2024-12-13T01:31:27.297954568Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:27.302969 containerd[1894]: time="2024-12-13T01:31:27.302861598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:27.306842 containerd[1894]: time="2024-12-13T01:31:27.306792722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.529234472s"
Dec 13 01:31:27.308068 containerd[1894]: time="2024-12-13T01:31:27.306847177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 01:31:27.310118 containerd[1894]: time="2024-12-13T01:31:27.310083707Z" level=info msg="CreateContainer within sandbox \"490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 01:31:27.380738 containerd[1894]: time="2024-12-13T01:31:27.380622190Z" level=info msg="CreateContainer within sandbox \"490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"976111767fef8f6aa255cfe16b5fb107f2a49cf783a50b2d47495e58b2105b5a\""
Dec 13 01:31:27.381653 containerd[1894]: time="2024-12-13T01:31:27.381606397Z" level=info msg="StartContainer for \"976111767fef8f6aa255cfe16b5fb107f2a49cf783a50b2d47495e58b2105b5a\""
Dec 13 01:31:27.451185 systemd[1]: Started cri-containerd-976111767fef8f6aa255cfe16b5fb107f2a49cf783a50b2d47495e58b2105b5a.scope - libcontainer container 976111767fef8f6aa255cfe16b5fb107f2a49cf783a50b2d47495e58b2105b5a.
Dec 13 01:31:27.498456 containerd[1894]: time="2024-12-13T01:31:27.498409964Z" level=info msg="StartContainer for \"976111767fef8f6aa255cfe16b5fb107f2a49cf783a50b2d47495e58b2105b5a\" returns successfully"
Dec 13 01:31:27.535103 kubelet[2343]: I1213 01:31:27.535043 2343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-wcwct" podStartSLOduration=16.65937301 podStartE2EDuration="23.535008284s" podCreationTimestamp="2024-12-13 01:31:04 +0000 UTC" firstStartedPulling="2024-12-13 01:31:18.885102156 +0000 UTC m=+31.530686898" lastFinishedPulling="2024-12-13 01:31:25.760737424 +0000 UTC m=+38.406322172" observedRunningTime="2024-12-13 01:31:26.539599152 +0000 UTC m=+39.185183909" watchObservedRunningTime="2024-12-13 01:31:27.535008284 +0000 UTC m=+40.180593044"
Dec 13 01:31:28.033875 kubelet[2343]: E1213 01:31:28.032074 2343 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:28.048482 systemd[1]: run-containerd-runc-k8s.io-0c272a9a3c52858d5cced9e48d28fcf0c02bd9f7a4952ad7438518855dd63939-runc.5sC4Ux.mount: Deactivated successfully.
Dec 13 01:31:28.086107 kubelet[2343]: E1213 01:31:28.086019 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:28.176583 kubelet[2343]: I1213 01:31:28.176513 2343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z7942" podStartSLOduration=30.932013622 podStartE2EDuration="40.17649062s" podCreationTimestamp="2024-12-13 01:30:48 +0000 UTC" firstStartedPulling="2024-12-13 01:31:18.064276476 +0000 UTC m=+30.709861215" lastFinishedPulling="2024-12-13 01:31:27.308753467 +0000 UTC m=+39.954338213" observedRunningTime="2024-12-13 01:31:27.535377181 +0000 UTC m=+40.180961927" watchObservedRunningTime="2024-12-13 01:31:28.17649062 +0000 UTC m=+40.822075372"
Dec 13 01:31:28.239267 kubelet[2343]: I1213 01:31:28.239235 2343 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 01:31:28.239267 kubelet[2343]: I1213 01:31:28.239275 2343 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 01:31:29.086753 kubelet[2343]: E1213 01:31:29.086693 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:30.087789 kubelet[2343]: E1213 01:31:30.087734 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:31.088749 kubelet[2343]: E1213 01:31:31.088460 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:32.089470 kubelet[2343]: E1213 01:31:32.089417 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:33.090590 kubelet[2343]: E1213 01:31:33.090543 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:34.091129 kubelet[2343]: E1213 01:31:34.091071 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:35.047900 systemd[1]: Created slice kubepods-besteffort-pod32b0da96_84d8_4c42_8319_0d11d6829330.slice - libcontainer container kubepods-besteffort-pod32b0da96_84d8_4c42_8319_0d11d6829330.slice.
Dec 13 01:31:35.092375 kubelet[2343]: E1213 01:31:35.092316 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:31:35.163205 kubelet[2343]: I1213 01:31:35.163123 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n79tf\" (UniqueName: \"kubernetes.io/projected/32b0da96-84d8-4c42-8319-0d11d6829330-kube-api-access-n79tf\") pod \"nfs-server-provisioner-0\" (UID: \"32b0da96-84d8-4c42-8319-0d11d6829330\") " pod="default/nfs-server-provisioner-0"
Dec 13 01:31:35.163372 kubelet[2343]: I1213 01:31:35.163243 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/32b0da96-84d8-4c42-8319-0d11d6829330-data\") pod \"nfs-server-provisioner-0\" (UID: \"32b0da96-84d8-4c42-8319-0d11d6829330\") " pod="default/nfs-server-provisioner-0"
Dec 13 01:31:35.357137 containerd[1894]: time="2024-12-13T01:31:35.356991516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32b0da96-84d8-4c42-8319-0d11d6829330,Namespace:default,Attempt:0,}"
Dec 13 01:31:35.644120 systemd-networkd[1713]: cali60e51b789ff: Link UP
Dec 13 01:31:35.645273 systemd-networkd[1713]: cali60e51b789ff: Gained carrier
Dec 13 01:31:35.650589 (udev-worker)[3716]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.481 [INFO][3698] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.26-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 32b0da96-84d8-4c42-8319-0d11d6829330 1122 0 2024-12-13 01:31:34 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.22.26 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.26-k8s-nfs--server--provisioner--0-"
Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.481 [INFO][3698] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.26-k8s-nfs--server--provisioner--0-eth0"
Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.526 [INFO][3709] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" HandleID="k8s-pod-network.bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Workload="172.31.22.26-k8s-nfs--server--provisioner--0-eth0"
Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.545 [INFO][3709] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" HandleID="k8s-pod-network.bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Workload="172.31.22.26-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051590), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.26", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 01:31:35.526321765 +0000 UTC"}, Hostname:"172.31.22.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.545 [INFO][3709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.545 [INFO][3709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.545 [INFO][3709] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.26' Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.554 [INFO][3709] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" host="172.31.22.26" Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.567 [INFO][3709] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.22.26" Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.582 [INFO][3709] ipam/ipam.go 489: Trying affinity for 192.168.34.0/26 host="172.31.22.26" Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.592 [INFO][3709] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.0/26 host="172.31.22.26" Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.599 [INFO][3709] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="172.31.22.26" Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.599 [INFO][3709] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" host="172.31.22.26" Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.603 [INFO][3709] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.618 [INFO][3709] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" host="172.31.22.26" Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.637 [INFO][3709] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.3/26] block=192.168.34.0/26 
handle="k8s-pod-network.bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" host="172.31.22.26" Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.637 [INFO][3709] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.3/26] handle="k8s-pod-network.bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" host="172.31.22.26" Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.637 [INFO][3709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:31:35.675403 containerd[1894]: 2024-12-13 01:31:35.637 [INFO][3709] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.3/26] IPv6=[] ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" HandleID="k8s-pod-network.bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Workload="172.31.22.26-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:31:35.676905 containerd[1894]: 2024-12-13 01:31:35.639 [INFO][3698] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.26-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"32b0da96-84d8-4c42-8319-0d11d6829330", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:31:35.676905 containerd[1894]: 2024-12-13 01:31:35.639 [INFO][3698] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.3/32] ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.26-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:31:35.676905 containerd[1894]: 2024-12-13 01:31:35.639 [INFO][3698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.26-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:31:35.676905 containerd[1894]: 2024-12-13 01:31:35.643 [INFO][3698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.26-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:31:35.677275 containerd[1894]: 2024-12-13 01:31:35.646 [INFO][3698] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.26-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"32b0da96-84d8-4c42-8319-0d11d6829330", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"8a:5d:bd:03:60:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:31:35.677275 containerd[1894]: 2024-12-13 01:31:35.670 [INFO][3698] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.26-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:31:35.752578 containerd[1894]: time="2024-12-13T01:31:35.752446950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:35.752783 containerd[1894]: time="2024-12-13T01:31:35.752638894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:35.752783 containerd[1894]: time="2024-12-13T01:31:35.752752424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:35.753376 containerd[1894]: time="2024-12-13T01:31:35.753020230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:35.794161 systemd[1]: Started cri-containerd-bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff.scope - libcontainer container bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff. Dec 13 01:31:35.854627 containerd[1894]: time="2024-12-13T01:31:35.854584755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32b0da96-84d8-4c42-8319-0d11d6829330,Namespace:default,Attempt:0,} returns sandbox id \"bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff\"" Dec 13 01:31:35.864863 containerd[1894]: time="2024-12-13T01:31:35.864794268Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:31:36.092518 kubelet[2343]: E1213 01:31:36.092473 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:36.979440 systemd-networkd[1713]: cali60e51b789ff: Gained IPv6LL Dec 13 01:31:37.093623 kubelet[2343]: E1213 01:31:37.093567 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:38.094439 kubelet[2343]: E1213 01:31:38.094335 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 01:31:39.094724 kubelet[2343]: E1213 01:31:39.094651 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:39.238641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994180210.mount: Deactivated successfully. Dec 13 01:31:39.388216 ntpd[1864]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:31:39.390036 ntpd[1864]: 13 Dec 01:31:39 ntpd[1864]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:31:40.095326 kubelet[2343]: E1213 01:31:40.095259 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:41.097150 kubelet[2343]: E1213 01:31:41.097074 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:42.098026 kubelet[2343]: E1213 01:31:42.097925 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:42.357389 containerd[1894]: time="2024-12-13T01:31:42.357120538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:42.358965 containerd[1894]: time="2024-12-13T01:31:42.358770551Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Dec 13 01:31:42.421129 containerd[1894]: time="2024-12-13T01:31:42.420960532Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.556094203s" Dec 13 01:31:42.421129 containerd[1894]: time="2024-12-13T01:31:42.421019125Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 01:31:42.436882 containerd[1894]: time="2024-12-13T01:31:42.436135112Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:42.438406 containerd[1894]: time="2024-12-13T01:31:42.438049217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:42.495798 containerd[1894]: time="2024-12-13T01:31:42.495747561Z" level=info msg="CreateContainer within sandbox \"bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:31:42.530833 containerd[1894]: time="2024-12-13T01:31:42.530786188Z" level=info msg="CreateContainer within sandbox \"bf0da7c40d8d72f159222c5eecd8619ef52e9e564c6434b84bc52effee6286ff\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c3689653f045207af9c594fedf464aace885861fbd21d880992f18f4635f46b5\"" Dec 13 01:31:42.536323 containerd[1894]: time="2024-12-13T01:31:42.536284215Z" level=info msg="StartContainer for \"c3689653f045207af9c594fedf464aace885861fbd21d880992f18f4635f46b5\"" Dec 13 01:31:42.588314 systemd[1]: run-containerd-runc-k8s.io-c3689653f045207af9c594fedf464aace885861fbd21d880992f18f4635f46b5-runc.Y8dSW2.mount: Deactivated successfully. 
Dec 13 01:31:42.600270 systemd[1]: Started cri-containerd-c3689653f045207af9c594fedf464aace885861fbd21d880992f18f4635f46b5.scope - libcontainer container c3689653f045207af9c594fedf464aace885861fbd21d880992f18f4635f46b5. Dec 13 01:31:42.671361 containerd[1894]: time="2024-12-13T01:31:42.671239350Z" level=info msg="StartContainer for \"c3689653f045207af9c594fedf464aace885861fbd21d880992f18f4635f46b5\" returns successfully" Dec 13 01:31:43.098609 kubelet[2343]: E1213 01:31:43.098551 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:43.786683 kubelet[2343]: I1213 01:31:43.785046 2343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=3.191480169 podStartE2EDuration="9.785021865s" podCreationTimestamp="2024-12-13 01:31:34 +0000 UTC" firstStartedPulling="2024-12-13 01:31:35.864228732 +0000 UTC m=+48.509813468" lastFinishedPulling="2024-12-13 01:31:42.457770426 +0000 UTC m=+55.103355164" observedRunningTime="2024-12-13 01:31:43.78205546 +0000 UTC m=+56.427640216" watchObservedRunningTime="2024-12-13 01:31:43.785021865 +0000 UTC m=+56.430606626" Dec 13 01:31:44.099674 kubelet[2343]: E1213 01:31:44.099611 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:45.100088 kubelet[2343]: E1213 01:31:45.100039 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:46.101169 kubelet[2343]: E1213 01:31:46.101111 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:47.102238 kubelet[2343]: E1213 01:31:47.102114 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:48.032232 kubelet[2343]: E1213 01:31:48.032180 
2343 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:48.072463 containerd[1894]: time="2024-12-13T01:31:48.072417202Z" level=info msg="StopPodSandbox for \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\"" Dec 13 01:31:48.105452 kubelet[2343]: E1213 01:31:48.102415 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.164 [WARNING][3877] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-csi--node--driver--z7942-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b", Pod:"csi-node-driver-z7942", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d4b21e966b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.164 [INFO][3877] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.164 [INFO][3877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" iface="eth0" netns="" Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.164 [INFO][3877] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.164 [INFO][3877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.212 [INFO][3883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" HandleID="k8s-pod-network.d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.212 [INFO][3883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.212 [INFO][3883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.224 [WARNING][3883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" HandleID="k8s-pod-network.d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.224 [INFO][3883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" HandleID="k8s-pod-network.d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.227 [INFO][3883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:31:48.230159 containerd[1894]: 2024-12-13 01:31:48.228 [INFO][3877] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:48.231104 containerd[1894]: time="2024-12-13T01:31:48.230196108Z" level=info msg="TearDown network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\" successfully" Dec 13 01:31:48.231104 containerd[1894]: time="2024-12-13T01:31:48.230227624Z" level=info msg="StopPodSandbox for \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\" returns successfully" Dec 13 01:31:48.239500 containerd[1894]: time="2024-12-13T01:31:48.239454939Z" level=info msg="RemovePodSandbox for \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\"" Dec 13 01:31:48.239500 containerd[1894]: time="2024-12-13T01:31:48.239501741Z" level=info msg="Forcibly stopping sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\"" Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.304 [WARNING][3901] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-csi--node--driver--z7942-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be32ba29-5e2e-4cfe-bef0-c648c28c8dd8", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"490f45f93f63254318c9dcf56936af7e6ddf8d1f136784c2302d35da779cef9b", Pod:"csi-node-driver-z7942", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d4b21e966b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.305 [INFO][3901] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.305 [INFO][3901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" iface="eth0" netns="" Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.305 [INFO][3901] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.305 [INFO][3901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.335 [INFO][3909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" HandleID="k8s-pod-network.d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.338 [INFO][3909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.339 [INFO][3909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.348 [WARNING][3909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" HandleID="k8s-pod-network.d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.348 [INFO][3909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" HandleID="k8s-pod-network.d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Workload="172.31.22.26-k8s-csi--node--driver--z7942-eth0" Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.353 [INFO][3909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:31:48.359797 containerd[1894]: 2024-12-13 01:31:48.357 [INFO][3901] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568" Dec 13 01:31:48.360944 containerd[1894]: time="2024-12-13T01:31:48.360902210Z" level=info msg="TearDown network for sandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\" successfully" Dec 13 01:31:48.390363 containerd[1894]: time="2024-12-13T01:31:48.390301544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:31:48.390522 containerd[1894]: time="2024-12-13T01:31:48.390399124Z" level=info msg="RemovePodSandbox \"d68dc80197b48875a9eaca0f37e49cfb803f78d31d0b81bc120af270fd51a568\" returns successfully" Dec 13 01:31:48.391002 containerd[1894]: time="2024-12-13T01:31:48.390965110Z" level=info msg="StopPodSandbox for \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\"" Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.436 [WARNING][3928] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"232f4630-aec0-4a84-977d-26cdc5c1e9a5", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417", Pod:"nginx-deployment-8587fbcb89-wcwct", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9f6506c4951", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.436 [INFO][3928] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.436 [INFO][3928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" iface="eth0" netns="" Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.436 [INFO][3928] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.436 [INFO][3928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.463 [INFO][3935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" HandleID="k8s-pod-network.4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.464 [INFO][3935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.464 [INFO][3935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.474 [WARNING][3935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" HandleID="k8s-pod-network.4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.474 [INFO][3935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" HandleID="k8s-pod-network.4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.477 [INFO][3935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:31:48.479860 containerd[1894]: 2024-12-13 01:31:48.478 [INFO][3928] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:48.480834 containerd[1894]: time="2024-12-13T01:31:48.479902897Z" level=info msg="TearDown network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\" successfully" Dec 13 01:31:48.480834 containerd[1894]: time="2024-12-13T01:31:48.479966619Z" level=info msg="StopPodSandbox for \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\" returns successfully" Dec 13 01:31:48.480834 containerd[1894]: time="2024-12-13T01:31:48.480493296Z" level=info msg="RemovePodSandbox for \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\"" Dec 13 01:31:48.480834 containerd[1894]: time="2024-12-13T01:31:48.480525308Z" level=info msg="Forcibly stopping sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\"" Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.529 [WARNING][3953] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"232f4630-aec0-4a84-977d-26cdc5c1e9a5", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"1047017fa8189e1d5c584263655cc14dd1ed1e2173d2ffd7a71acfa561293417", Pod:"nginx-deployment-8587fbcb89-wcwct", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9f6506c4951", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.530 [INFO][3953] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.530 [INFO][3953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" iface="eth0" netns="" Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.530 [INFO][3953] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.530 [INFO][3953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.557 [INFO][3959] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" HandleID="k8s-pod-network.4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.558 [INFO][3959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.558 [INFO][3959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.568 [WARNING][3959] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" HandleID="k8s-pod-network.4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.568 [INFO][3959] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" HandleID="k8s-pod-network.4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Workload="172.31.22.26-k8s-nginx--deployment--8587fbcb89--wcwct-eth0" Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.570 [INFO][3959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:31:48.573599 containerd[1894]: 2024-12-13 01:31:48.572 [INFO][3953] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64" Dec 13 01:31:48.574385 containerd[1894]: time="2024-12-13T01:31:48.573653009Z" level=info msg="TearDown network for sandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\" successfully" Dec 13 01:31:48.577741 containerd[1894]: time="2024-12-13T01:31:48.577681329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:31:48.577994 containerd[1894]: time="2024-12-13T01:31:48.577743559Z" level=info msg="RemovePodSandbox \"4d92843193974d143ead4e3ffdcac5cd0f32ab7e1dc579eb88ac91d50add9a64\" returns successfully" Dec 13 01:31:49.103347 kubelet[2343]: E1213 01:31:49.103307 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:50.103879 kubelet[2343]: E1213 01:31:50.103821 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:51.105026 kubelet[2343]: E1213 01:31:51.104971 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:52.105511 kubelet[2343]: E1213 01:31:52.105452 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:53.106285 kubelet[2343]: E1213 01:31:53.106224 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:54.106951 kubelet[2343]: E1213 01:31:54.106879 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:55.107762 kubelet[2343]: E1213 01:31:55.107706 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:56.109151 kubelet[2343]: E1213 01:31:56.108794 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:57.109770 kubelet[2343]: E1213 01:31:57.109712 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:58.110548 kubelet[2343]: E1213 01:31:58.110494 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 01:31:59.111254 kubelet[2343]: E1213 01:31:59.111195 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:00.111494 kubelet[2343]: E1213 01:32:00.111437 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:01.113107 kubelet[2343]: E1213 01:32:01.113024 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:02.113615 kubelet[2343]: E1213 01:32:02.113486 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:03.114543 kubelet[2343]: E1213 01:32:03.114489 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:04.115675 kubelet[2343]: E1213 01:32:04.115623 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:05.116616 kubelet[2343]: E1213 01:32:05.116561 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:06.117318 kubelet[2343]: E1213 01:32:06.117274 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:07.117560 kubelet[2343]: E1213 01:32:07.117427 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:07.294349 systemd[1]: Created slice kubepods-besteffort-pod0f4716d5_64c4_4945_8e3a_57c3ded606d7.slice - libcontainer container kubepods-besteffort-pod0f4716d5_64c4_4945_8e3a_57c3ded606d7.slice. 
Dec 13 01:32:07.362521 kubelet[2343]: I1213 01:32:07.362249 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a4f75e9e-b0a2-40cb-8a08-025383e23f5a\" (UniqueName: \"kubernetes.io/nfs/0f4716d5-64c4-4945-8e3a-57c3ded606d7-pvc-a4f75e9e-b0a2-40cb-8a08-025383e23f5a\") pod \"test-pod-1\" (UID: \"0f4716d5-64c4-4945-8e3a-57c3ded606d7\") " pod="default/test-pod-1" Dec 13 01:32:07.362521 kubelet[2343]: I1213 01:32:07.362390 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvkf\" (UniqueName: \"kubernetes.io/projected/0f4716d5-64c4-4945-8e3a-57c3ded606d7-kube-api-access-qvvkf\") pod \"test-pod-1\" (UID: \"0f4716d5-64c4-4945-8e3a-57c3ded606d7\") " pod="default/test-pod-1" Dec 13 01:32:07.545025 kernel: FS-Cache: Loaded Dec 13 01:32:07.753585 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:32:07.753719 kernel: RPC: Registered udp transport module. Dec 13 01:32:07.753804 kernel: RPC: Registered tcp transport module. Dec 13 01:32:07.753824 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 01:32:07.753842 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 01:32:08.033065 kubelet[2343]: E1213 01:32:08.032852 2343 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:08.118738 kubelet[2343]: E1213 01:32:08.118161 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:08.169031 kernel: NFS: Registering the id_resolver key type Dec 13 01:32:08.169272 kernel: Key type id_resolver registered Dec 13 01:32:08.169468 kernel: Key type id_legacy registered Dec 13 01:32:08.243333 nfsidmap[4027]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 01:32:08.258399 nfsidmap[4029]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 01:32:08.502220 containerd[1894]: time="2024-12-13T01:32:08.502110781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0f4716d5-64c4-4945-8e3a-57c3ded606d7,Namespace:default,Attempt:0,}" Dec 13 01:32:08.734377 systemd-networkd[1713]: cali5ec59c6bf6e: Link UP Dec 13 01:32:08.740409 (udev-worker)[4014]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:32:08.741025 systemd-networkd[1713]: cali5ec59c6bf6e: Gained carrier Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.587 [INFO][4030] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.26-k8s-test--pod--1-eth0 default 0f4716d5-64c4-4945-8e3a-57c3ded606d7 1219 0 2024-12-13 01:31:37 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.22.26 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.26-k8s-test--pod--1-" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.588 [INFO][4030] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.26-k8s-test--pod--1-eth0" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.662 [INFO][4041] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" HandleID="k8s-pod-network.6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Workload="172.31.22.26-k8s-test--pod--1-eth0" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.673 [INFO][4041] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" HandleID="k8s-pod-network.6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Workload="172.31.22.26-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002927f0), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.26", "pod":"test-pod-1", "timestamp":"2024-12-13 
01:32:08.662693565 +0000 UTC"}, Hostname:"172.31.22.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.673 [INFO][4041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.673 [INFO][4041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.673 [INFO][4041] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.26' Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.676 [INFO][4041] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" host="172.31.22.26" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.680 [INFO][4041] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.22.26" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.689 [INFO][4041] ipam/ipam.go 489: Trying affinity for 192.168.34.0/26 host="172.31.22.26" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.692 [INFO][4041] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.0/26 host="172.31.22.26" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.695 [INFO][4041] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="172.31.22.26" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.695 [INFO][4041] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" host="172.31.22.26" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.705 [INFO][4041] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077 Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.714 [INFO][4041] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" host="172.31.22.26" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.727 [INFO][4041] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.4/26] block=192.168.34.0/26 handle="k8s-pod-network.6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" host="172.31.22.26" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.728 [INFO][4041] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.4/26] handle="k8s-pod-network.6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" host="172.31.22.26" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.728 [INFO][4041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.728 [INFO][4041] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.4/26] IPv6=[] ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" HandleID="k8s-pod-network.6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Workload="172.31.22.26-k8s-test--pod--1-eth0" Dec 13 01:32:08.767316 containerd[1894]: 2024-12-13 01:32:08.730 [INFO][4030] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.26-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0f4716d5-64c4-4945-8e3a-57c3ded606d7", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.26", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:08.782381 containerd[1894]: 2024-12-13 01:32:08.730 [INFO][4030] cni-plugin/k8s.go 387: Calico CNI using IPs: 
[192.168.34.4/32] ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.26-k8s-test--pod--1-eth0" Dec 13 01:32:08.782381 containerd[1894]: 2024-12-13 01:32:08.730 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.26-k8s-test--pod--1-eth0" Dec 13 01:32:08.782381 containerd[1894]: 2024-12-13 01:32:08.735 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.26-k8s-test--pod--1-eth0" Dec 13 01:32:08.782381 containerd[1894]: 2024-12-13 01:32:08.738 [INFO][4030] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.26-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.26-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0f4716d5-64c4-4945-8e3a-57c3ded606d7", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"172.31.22.26", ContainerID:"6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"12:15:af:f0:33:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:32:08.782381 containerd[1894]: 2024-12-13 01:32:08.755 [INFO][4030] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.26-k8s-test--pod--1-eth0" Dec 13 01:32:08.875437 containerd[1894]: time="2024-12-13T01:32:08.875130268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:08.875437 containerd[1894]: time="2024-12-13T01:32:08.875215382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:08.875437 containerd[1894]: time="2024-12-13T01:32:08.875231381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:08.875437 containerd[1894]: time="2024-12-13T01:32:08.875346727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:08.913171 systemd[1]: run-containerd-runc-k8s.io-6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077-runc.kXl2cl.mount: Deactivated successfully. Dec 13 01:32:08.922191 systemd[1]: Started cri-containerd-6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077.scope - libcontainer container 6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077. 
Dec 13 01:32:09.032673 containerd[1894]: time="2024-12-13T01:32:09.031926353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0f4716d5-64c4-4945-8e3a-57c3ded606d7,Namespace:default,Attempt:0,} returns sandbox id \"6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077\"" Dec 13 01:32:09.034518 containerd[1894]: time="2024-12-13T01:32:09.034476441Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:32:09.118390 kubelet[2343]: E1213 01:32:09.118334 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:09.346794 containerd[1894]: time="2024-12-13T01:32:09.346745847Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:09.349118 containerd[1894]: time="2024-12-13T01:32:09.349042860Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 01:32:09.352508 containerd[1894]: time="2024-12-13T01:32:09.352465568Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 317.940973ms" Dec 13 01:32:09.352508 containerd[1894]: time="2024-12-13T01:32:09.352507913Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:32:09.354814 containerd[1894]: time="2024-12-13T01:32:09.354777911Z" level=info msg="CreateContainer within sandbox \"6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 01:32:09.378642 containerd[1894]: 
time="2024-12-13T01:32:09.378582995Z" level=info msg="CreateContainer within sandbox \"6c79de087e5e2da6999a6717518b626bdffe30e99ff92b6a133c13c778db0077\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"513bf29b599c9397fdbfbbc1fc040dcde854d19cb73d673e45d92e7958788b49\"" Dec 13 01:32:09.379501 containerd[1894]: time="2024-12-13T01:32:09.379466137Z" level=info msg="StartContainer for \"513bf29b599c9397fdbfbbc1fc040dcde854d19cb73d673e45d92e7958788b49\"" Dec 13 01:32:09.439210 systemd[1]: Started cri-containerd-513bf29b599c9397fdbfbbc1fc040dcde854d19cb73d673e45d92e7958788b49.scope - libcontainer container 513bf29b599c9397fdbfbbc1fc040dcde854d19cb73d673e45d92e7958788b49. Dec 13 01:32:09.485327 containerd[1894]: time="2024-12-13T01:32:09.485234033Z" level=info msg="StartContainer for \"513bf29b599c9397fdbfbbc1fc040dcde854d19cb73d673e45d92e7958788b49\" returns successfully" Dec 13 01:32:09.849008 kubelet[2343]: I1213 01:32:09.848923 2343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.529611153 podStartE2EDuration="32.848887987s" podCreationTimestamp="2024-12-13 01:31:37 +0000 UTC" firstStartedPulling="2024-12-13 01:32:09.034076463 +0000 UTC m=+81.679661212" lastFinishedPulling="2024-12-13 01:32:09.3533533 +0000 UTC m=+81.998938046" observedRunningTime="2024-12-13 01:32:09.848887374 +0000 UTC m=+82.494472130" watchObservedRunningTime="2024-12-13 01:32:09.848887987 +0000 UTC m=+82.494472744" Dec 13 01:32:10.119397 kubelet[2343]: E1213 01:32:10.119258 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:10.387579 systemd-networkd[1713]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 01:32:11.120437 kubelet[2343]: E1213 01:32:11.120381 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:32:12.121349 kubelet[2343]: E1213 01:32:12.121284 2343 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:13.122053 kubelet[2343]: E1213 01:32:13.121958 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:13.382473 ntpd[1864]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Dec 13 01:32:13.383064 ntpd[1864]: 13 Dec 01:32:13 ntpd[1864]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Dec 13 01:32:14.122273 kubelet[2343]: E1213 01:32:14.122217 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:15.122804 kubelet[2343]: E1213 01:32:15.122752 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:16.122978 kubelet[2343]: E1213 01:32:16.122914 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:17.123889 kubelet[2343]: E1213 01:32:17.123837 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:18.124231 kubelet[2343]: E1213 01:32:18.124138 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:19.125282 kubelet[2343]: E1213 01:32:19.125224 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:20.126104 kubelet[2343]: E1213 01:32:20.125898 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:21.126911 kubelet[2343]: E1213 01:32:21.126855 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:22.127518 kubelet[2343]: E1213 01:32:22.127478 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:23.128224 kubelet[2343]: E1213 01:32:23.128168 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:24.128378 kubelet[2343]: E1213 01:32:24.128333 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:25.129603 kubelet[2343]: E1213 01:32:25.129544 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:26.130428 kubelet[2343]: E1213 01:32:26.130370 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:27.131328 kubelet[2343]: E1213 01:32:27.131273 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:28.028290 systemd[1]: run-containerd-runc-k8s.io-0c272a9a3c52858d5cced9e48d28fcf0c02bd9f7a4952ad7438518855dd63939-runc.ox9RY8.mount: Deactivated successfully.
Dec 13 01:32:28.032295 kubelet[2343]: E1213 01:32:28.032224 2343 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:28.132003 kubelet[2343]: E1213 01:32:28.131920 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:29.132452 kubelet[2343]: E1213 01:32:29.132386 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:30.133292 kubelet[2343]: E1213 01:32:30.133166 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:30.584738 kubelet[2343]: E1213 01:32:30.584666 2343 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:32:31.134405 kubelet[2343]: E1213 01:32:31.134342 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:32.135439 kubelet[2343]: E1213 01:32:32.135307 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:33.136455 kubelet[2343]: E1213 01:32:33.136402 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:34.137531 kubelet[2343]: E1213 01:32:34.137418 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:35.138613 kubelet[2343]: E1213 01:32:35.138556 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:36.139776 kubelet[2343]: E1213 01:32:36.139716 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:37.139970 kubelet[2343]: E1213 01:32:37.139897 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:38.140425 kubelet[2343]: E1213 01:32:38.140365 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:39.140953 kubelet[2343]: E1213 01:32:39.140895 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:40.141400 kubelet[2343]: E1213 01:32:40.141347 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:40.585722 kubelet[2343]: E1213 01:32:40.585665 2343 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:32:41.142150 kubelet[2343]: E1213 01:32:41.142092 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:42.142531 kubelet[2343]: E1213 01:32:42.142453 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:43.142903 kubelet[2343]: E1213 01:32:43.142842 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:44.144022 kubelet[2343]: E1213 01:32:44.143979 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:45.144986 kubelet[2343]: E1213 01:32:45.144923 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:46.145116 kubelet[2343]: E1213 01:32:46.145056 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:47.146131 kubelet[2343]: E1213 01:32:47.146090 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:48.032796 kubelet[2343]: E1213 01:32:48.032740 2343 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:48.146841 kubelet[2343]: E1213 01:32:48.146730 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:49.147189 kubelet[2343]: E1213 01:32:49.147125 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:50.147634 kubelet[2343]: E1213 01:32:50.147556 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:50.589206 kubelet[2343]: E1213 01:32:50.589149 2343 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:32:51.148330 kubelet[2343]: E1213 01:32:51.148275 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:52.149096 kubelet[2343]: E1213 01:32:52.149040 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:53.149377 kubelet[2343]: E1213 01:32:53.149325 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:54.149963 kubelet[2343]: E1213 01:32:54.149904 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:55.151092 kubelet[2343]: E1213 01:32:55.151035 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:56.151702 kubelet[2343]: E1213 01:32:56.151643 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:57.152370 kubelet[2343]: E1213 01:32:57.152308 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:58.152641 kubelet[2343]: E1213 01:32:58.152585 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:32:59.153040 kubelet[2343]: E1213 01:32:59.152907 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:00.153790 kubelet[2343]: E1213 01:33:00.153639 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:00.590438 kubelet[2343]: E1213 01:33:00.590361 2343 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:33:00.650578 kubelet[2343]: E1213 01:33:00.649965 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.20:6443/api/v1/namespaces/calico-system/events\": read tcp 172.31.22.26:39232->172.31.31.20:6443: read: connection reset by peer" event=<
Dec 13 01:33:00.650578 kubelet[2343]: &Event{ObjectMeta:{calico-node-rxh4k.1810988ab7208716 calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-node-rxh4k,UID:5eab126a-0555-46b7-b523-f2c15aaf03c4,APIVersion:v1,ResourceVersion:825,FieldPath:spec.containers{calico-node},},Reason:Unhealthy,Message:Readiness probe failed: 2024-12-13 01:32:58.078 [INFO][361] node/health.go 202: Number of node(s) with BGP peering established = 0
Dec 13 01:33:00.650578 kubelet[2343]: calico/node is not ready: BIRD is not ready: BGP not established with 172.31.31.20
Dec 13 01:33:00.650578 kubelet[2343]: ,Source:EventSource{Component:kubelet,Host:172.31.22.26,},FirstTimestamp:2024-12-13 01:32:58.082912022 +0000 UTC m=+130.728496780,LastTimestamp:2024-12-13 01:32:58.082912022 +0000 UTC m=+130.728496780,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.26,}
Dec 13 01:33:00.650578 kubelet[2343]: >
Dec 13 01:33:00.658211 kubelet[2343]: E1213 01:33:00.658161 2343 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.26?timeout=10s\": read tcp 172.31.22.26:39232->172.31.31.20:6443: read: connection reset by peer"
Dec 13 01:33:00.661001 kubelet[2343]: I1213 01:33:00.660700 2343 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 13 01:33:00.670875 kubelet[2343]: E1213 01:33:00.667811 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.26?timeout=10s\": dial tcp 172.31.31.20:6443: connect: connection refused" interval="200ms"
Dec 13 01:33:00.870987 kubelet[2343]: E1213 01:33:00.870834 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.26?timeout=10s\": dial tcp 172.31.31.20:6443: connect: connection refused" interval="400ms"
Dec 13 01:33:01.154901 kubelet[2343]: E1213 01:33:01.154763 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:01.272814 kubelet[2343]: E1213 01:33:01.272755 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.26?timeout=10s\": dial tcp 172.31.31.20:6443: connect: connection refused" interval="800ms"
Dec 13 01:33:02.155582 kubelet[2343]: E1213 01:33:02.155288 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:03.156628 kubelet[2343]: E1213 01:33:03.156571 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:04.157481 kubelet[2343]: E1213 01:33:04.157420 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:05.158004 kubelet[2343]: E1213 01:33:05.157952 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:06.159153 kubelet[2343]: E1213 01:33:06.159088 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:07.159845 kubelet[2343]: E1213 01:33:07.159789 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:08.032574 kubelet[2343]: E1213 01:33:08.032519 2343 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:08.160304 kubelet[2343]: E1213 01:33:08.160246 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:09.160688 kubelet[2343]: E1213 01:33:09.160627 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:10.161252 kubelet[2343]: E1213 01:33:10.161193 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:11.161996 kubelet[2343]: E1213 01:33:11.161945 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:12.074560 kubelet[2343]: E1213 01:33:12.074494 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.26?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Dec 13 01:33:12.163072 kubelet[2343]: E1213 01:33:12.163010 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:13.164234 kubelet[2343]: E1213 01:33:13.164179 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:14.164444 kubelet[2343]: E1213 01:33:14.164381 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:15.165718 kubelet[2343]: E1213 01:33:15.165657 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:16.166431 kubelet[2343]: E1213 01:33:16.166348 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:33:17.167248 kubelet[2343]: E1213 01:33:17.167192 2343 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"