Jan 17 12:17:15.020229 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:17:15.020414 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:17:15.020433 kernel: BIOS-provided physical RAM map: Jan 17 12:17:15.020444 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 12:17:15.020455 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 12:17:15.020466 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 12:17:15.020602 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 17 12:17:15.020686 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 17 12:17:15.023029 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 17 12:17:15.023045 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 12:17:15.023059 kernel: NX (Execute Disable) protection: active Jan 17 12:17:15.023071 kernel: APIC: Static calls initialized Jan 17 12:17:15.023084 kernel: SMBIOS 2.7 present. Jan 17 12:17:15.023097 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 17 12:17:15.023120 kernel: Hypervisor detected: KVM Jan 17 12:17:15.023134 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:17:15.023149 kernel: kvm-clock: using sched offset of 6038898568 cycles Jan 17 12:17:15.023164 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:17:15.023179 kernel: tsc: Detected 2499.996 MHz processor Jan 17 12:17:15.023194 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:17:15.023209 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:17:15.023226 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 17 12:17:15.023240 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:17:15.023255 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:17:15.023269 kernel: Using GB pages for direct mapping Jan 17 12:17:15.023283 kernel: ACPI: Early table checksum verification disabled Jan 17 12:17:15.023297 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 17 12:17:15.023311 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 17 12:17:15.023325 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 17 12:17:15.023340 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 17 12:17:15.023356 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 17 12:17:15.023371 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 17 12:17:15.023385 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 17 12:17:15.023399 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 17 12:17:15.023412 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 17 12:17:15.023472 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 17 12:17:15.023541 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 17 12:17:15.023558 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 17 12:17:15.023572 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 17 12:17:15.023591 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 17 12:17:15.023611 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 17 12:17:15.023625 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 17 12:17:15.023640 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 17 12:17:15.023655 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 17 12:17:15.023673 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 17 12:17:15.023700 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 17 12:17:15.023715 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 17 12:17:15.023730 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 17 12:17:15.023746 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:17:15.023761 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:17:15.023776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 17 12:17:15.023792 kernel: NUMA: Initialized distance table, cnt=1 Jan 17 12:17:15.023807 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 17 12:17:15.023825 kernel: Zone ranges: Jan 17 12:17:15.023840 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:17:15.023855 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 17 12:17:15.023870 kernel: Normal empty Jan 17 12:17:15.023885 kernel: Movable zone start for each node Jan 17 12:17:15.023900 kernel: Early memory node ranges Jan 17 12:17:15.023915 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:17:15.023930 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 17 12:17:15.023945 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 17 12:17:15.023963 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:17:15.024031 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:17:15.024048 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 17 12:17:15.024063 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 12:17:15.024078 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:17:15.024093 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 17 12:17:15.024108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:17:15.024124 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:17:15.024138 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:17:15.024153 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:17:15.024172 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:17:15.024187 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:17:15.024201 kernel: TSC deadline timer available Jan 17 12:17:15.024214 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:17:15.024242 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:17:15.024269 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 17 12:17:15.024296 kernel: Booting paravirtualized kernel on KVM Jan 17 12:17:15.024308 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:17:15.024322 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:17:15.024340 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 17 12:17:15.024354 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:17:15.024368 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:17:15.024381 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:17:15.024393 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:17:15.024408 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:17:15.024424 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:17:15.024438 kernel: random: crng init done Jan 17 12:17:15.024454 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:17:15.024467 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:17:15.024482 kernel: Fallback order for Node 0: 0 Jan 17 12:17:15.024495 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jan 17 12:17:15.024506 kernel: Policy zone: DMA32 Jan 17 12:17:15.024527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:17:15.024540 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125152K reserved, 0K cma-reserved) Jan 17 12:17:15.024553 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:17:15.024657 kernel: Kernel/User page tables isolation: enabled Jan 17 12:17:15.024677 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:17:15.028753 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:17:15.028791 kernel: Dynamic Preempt: voluntary Jan 17 12:17:15.028806 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:17:15.028823 kernel: rcu: RCU event tracing is enabled. Jan 17 12:17:15.028837 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:17:15.028852 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:17:15.028868 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:17:15.028883 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:17:15.028958 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:17:15.028976 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:17:15.028991 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:17:15.029007 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 17 12:17:15.029022 kernel: Console: colour VGA+ 80x25 Jan 17 12:17:15.029038 kernel: printk: console [ttyS0] enabled Jan 17 12:17:15.029053 kernel: ACPI: Core revision 20230628 Jan 17 12:17:15.029069 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 17 12:17:15.029085 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:17:15.029105 kernel: x2apic enabled Jan 17 12:17:15.029121 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:17:15.029203 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 17 12:17:15.029225 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jan 17 12:17:15.029243 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 17 12:17:15.029260 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 17 12:17:15.029277 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:17:15.029293 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:17:15.029308 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:17:15.029324 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:17:15.029341 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 17 12:17:15.029357 kernel: RETBleed: Vulnerable Jan 17 12:17:15.029374 kernel: Speculative Store Bypass: Vulnerable Jan 17 12:17:15.029396 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:17:15.029412 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:17:15.029428 kernel: GDS: Unknown: Dependent on hypervisor status Jan 17 12:17:15.029450 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:17:15.029466 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:17:15.029482 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:17:15.029501 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 17 12:17:15.029518 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 17 12:17:15.029534 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 17 12:17:15.029550 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 17 12:17:15.029567 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 17 12:17:15.029583 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 17 12:17:15.029599 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:17:15.029615 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 17 12:17:15.029631 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 17 12:17:15.029648 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 17 12:17:15.029664 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 17 12:17:15.029684 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 17 12:17:15.030754 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 17 12:17:15.030775 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 17 12:17:15.030792 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:17:15.030809 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:17:15.030825 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:17:15.030842 kernel: landlock: Up and running. Jan 17 12:17:15.030858 kernel: SELinux: Initializing. Jan 17 12:17:15.030874 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:17:15.030890 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:17:15.030907 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 17 12:17:15.030928 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:17:15.031030 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:17:15.031048 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:17:15.031065 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 17 12:17:15.031082 kernel: signal: max sigframe size: 3632 Jan 17 12:17:15.031098 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:17:15.031116 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:17:15.031131 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:17:15.031148 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:17:15.031169 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:17:15.031185 kernel: .... node #0, CPUs: #1 Jan 17 12:17:15.031203 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 12:17:15.031221 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 12:17:15.031237 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:17:15.031253 kernel: smpboot: Max logical packages: 1 Jan 17 12:17:15.031270 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jan 17 12:17:15.031286 kernel: devtmpfs: initialized Jan 17 12:17:15.031303 kernel: x86/mm: Memory block size: 128MB Jan 17 12:17:15.031323 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:17:15.031339 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:17:15.031356 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:17:15.031372 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:17:15.031435 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:17:15.031452 kernel: audit: type=2000 audit(1737116234.063:1): state=initialized audit_enabled=0 res=1 Jan 17 12:17:15.031469 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:17:15.031512 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:17:15.031532 kernel: cpuidle: using governor menu Jan 17 12:17:15.031549 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:17:15.031566 kernel: dca service started, version 1.12.1 Jan 17 12:17:15.031606 kernel: PCI: Using configuration type 1 for base access Jan 17 12:17:15.031623 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:17:15.031639 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:17:15.031655 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:17:15.036240 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:17:15.036269 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:17:15.036293 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:17:15.036310 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:17:15.036325 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:17:15.036342 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:17:15.036359 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 12:17:15.036375 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:17:15.036392 kernel: ACPI: Interpreter enabled Jan 17 12:17:15.036409 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:17:15.036426 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:17:15.036443 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:17:15.036463 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:17:15.036479 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 12:17:15.036496 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:17:15.036767 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:17:15.037007 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:17:15.037211 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:17:15.037235 kernel: acpiphp: Slot [3] registered Jan 17 12:17:15.037257 kernel: acpiphp: Slot [4] registered Jan 17 12:17:15.037274 kernel: acpiphp: Slot [5] registered Jan 17 12:17:15.037292 kernel: acpiphp: Slot [6] registered Jan 17 12:17:15.037308 kernel: acpiphp: Slot [7] registered Jan 17 12:17:15.037325 kernel: acpiphp: Slot [8] registered Jan 17 12:17:15.037341 kernel: acpiphp: Slot [9] registered Jan 17 12:17:15.037358 kernel: acpiphp: Slot [10] registered Jan 17 12:17:15.037375 kernel: acpiphp: Slot [11] registered Jan 17 12:17:15.037390 kernel: acpiphp: Slot [12] registered Jan 17 12:17:15.037410 kernel: acpiphp: Slot [13] registered Jan 17 12:17:15.037426 kernel: acpiphp: Slot [14] registered Jan 17 12:17:15.037450 kernel: acpiphp: Slot [15] registered Jan 17 12:17:15.037466 kernel: acpiphp: Slot [16] registered Jan 17 12:17:15.037483 kernel: acpiphp: Slot [17] registered Jan 17 12:17:15.037500 kernel: acpiphp: Slot [18] registered Jan 17 12:17:15.037516 kernel: acpiphp: Slot [19] registered Jan 17 12:17:15.037533 kernel: acpiphp: Slot [20] registered Jan 17 12:17:15.037549 kernel: acpiphp: Slot [21] registered Jan 17 12:17:15.037566 kernel: acpiphp: Slot [22] registered Jan 17 12:17:15.037586 kernel: acpiphp: Slot [23] registered Jan 17 12:17:15.037602 kernel: acpiphp: Slot [24] registered Jan 17 12:17:15.037619 kernel: acpiphp: Slot [25] registered Jan 17 12:17:15.037635 kernel: acpiphp: Slot [26] registered Jan 17 12:17:15.037652 kernel: acpiphp: Slot [27] registered Jan 17 12:17:15.037667 kernel: acpiphp: Slot [28] registered Jan 17 12:17:15.037684 kernel: acpiphp: Slot [29] registered Jan 17 12:17:15.041310 kernel: acpiphp: Slot [30] registered Jan 17 12:17:15.041338 kernel: acpiphp: Slot [31] registered Jan 17 12:17:15.041361 kernel: PCI host bridge to bus 0000:00 
Jan 17 12:17:15.041591 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:17:15.045808 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:17:15.046032 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:17:15.046167 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 17 12:17:15.046291 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:17:15.046450 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:17:15.046610 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 17 12:17:15.047402 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 17 12:17:15.047578 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 12:17:15.049774 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 17 12:17:15.053615 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 17 12:17:15.053834 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 17 12:17:15.053980 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 17 12:17:15.054128 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 17 12:17:15.054360 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 17 12:17:15.054502 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 17 12:17:15.054648 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 17 12:17:15.054806 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 17 12:17:15.054943 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 17 12:17:15.055078 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:17:15.055237 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 17 12:17:15.056851 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 17 12:17:15.057020 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 17 12:17:15.057214 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 17 12:17:15.057238 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:17:15.057255 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:17:15.057278 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:17:15.057295 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:17:15.057311 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 12:17:15.057327 kernel: iommu: Default domain type: Translated Jan 17 12:17:15.057344 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:17:15.057360 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:17:15.057455 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:17:15.057475 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 12:17:15.057492 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 17 12:17:15.057649 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 17 12:17:15.057811 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 17 12:17:15.057950 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:17:15.058271 kernel: vgaarb: loaded Jan 17 12:17:15.058298 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 17 12:17:15.058315 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 17 12:17:15.058330 kernel: clocksource: Switched 
to clocksource kvm-clock Jan 17 12:17:15.058344 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:17:15.058359 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:17:15.058381 kernel: pnp: PnP ACPI init Jan 17 12:17:15.058396 kernel: pnp: PnP ACPI: found 5 devices Jan 17 12:17:15.058412 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:17:15.058428 kernel: NET: Registered PF_INET protocol family Jan 17 12:17:15.058443 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:17:15.058459 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:17:15.058475 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:17:15.058491 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:17:15.058510 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:17:15.058525 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:17:15.058541 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:17:15.058557 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:17:15.058572 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:17:15.058588 kernel: NET: Registered PF_XDP protocol family Jan 17 12:17:15.063791 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:17:15.063933 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:17:15.064047 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:17:15.064291 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 17 12:17:15.064431 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:17:15.064452 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:17:15.064469 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:17:15.064485 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 17 12:17:15.064501 kernel: clocksource: Switched to clocksource tsc Jan 17 12:17:15.064516 kernel: Initialise system trusted keyrings Jan 17 12:17:15.064532 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:17:15.064552 kernel: Key type asymmetric registered Jan 17 12:17:15.064569 kernel: Asymmetric key parser 'x509' registered Jan 17 12:17:15.064593 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:17:15.064614 kernel: io scheduler mq-deadline registered Jan 17 12:17:15.064630 kernel: io scheduler kyber registered Jan 17 12:17:15.064646 kernel: io scheduler bfq registered Jan 17 12:17:15.064662 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:17:15.064679 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:17:15.064717 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:17:15.064738 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:17:15.064755 kernel: i8042: Warning: Keylock active Jan 17 12:17:15.064771 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:17:15.064787 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:17:15.065798 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 12:17:15.065948 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 
12:17:15.066076 kernel: rtc_cmos 00:00: setting system clock to 2025-01-17T12:17:14 UTC (1737116234) Jan 17 12:17:15.066255 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 12:17:15.066284 kernel: intel_pstate: CPU model not supported Jan 17 12:17:15.066301 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:17:15.066358 kernel: Segment Routing with IPv6 Jan 17 12:17:15.066376 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:17:15.066394 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:17:15.066410 kernel: Key type dns_resolver registered Jan 17 12:17:15.066427 kernel: IPI shorthand broadcast: enabled Jan 17 12:17:15.066444 kernel: sched_clock: Marking stable (558106246, 248847655)->(901814585, -94860684) Jan 17 12:17:15.066509 kernel: registered taskstats version 1 Jan 17 12:17:15.066533 kernel: Loading compiled-in X.509 certificates Jan 17 12:17:15.066550 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:17:15.066567 kernel: Key type .fscrypt registered Jan 17 12:17:15.066583 kernel: Key type fscrypt-provisioning registered Jan 17 12:17:15.066600 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 12:17:15.066616 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:17:15.066633 kernel: ima: No architecture policies found Jan 17 12:17:15.066649 kernel: clk: Disabling unused clocks Jan 17 12:17:15.066666 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:17:15.066685 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:17:15.071764 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:17:15.071784 kernel: Run /init as init process Jan 17 12:17:15.071851 kernel: with arguments: Jan 17 12:17:15.071868 kernel: /init Jan 17 12:17:15.071884 kernel: with environment: Jan 17 12:17:15.071899 kernel: HOME=/ Jan 17 12:17:15.071916 kernel: TERM=linux Jan 17 12:17:15.071932 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:17:15.072081 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:17:15.072117 systemd[1]: Detected virtualization amazon. Jan 17 12:17:15.072138 systemd[1]: Detected architecture x86-64. Jan 17 12:17:15.072156 systemd[1]: Running in initrd. Jan 17 12:17:15.072174 systemd[1]: No hostname configured, using default hostname. Jan 17 12:17:15.072194 systemd[1]: Hostname set to . Jan 17 12:17:15.072213 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:17:15.072231 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:17:15.072249 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:17:15.072267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:17:15.072287 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:17:15.072305 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:17:15.072323 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jan 17 12:17:15.072344 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:17:15.072365 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:17:15.072384 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:17:15.072402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:17:15.072420 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:17:15.072438 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:17:15.072459 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:17:15.072477 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:17:15.072495 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:17:15.072513 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:17:15.072532 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:17:15.072550 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:17:15.072569 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:17:15.072587 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:17:15.072605 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:17:15.072626 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:17:15.072644 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:17:15.072662 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:17:15.072679 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:17:15.072709 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:17:15.072728 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:17:15.072746 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:17:15.072768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:17:15.072786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:17:15.072804 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:17:15.072822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:17:15.072840 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:17:15.072910 systemd-journald[178]: Collecting audit messages is disabled. Jan 17 12:17:15.072957 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:17:15.072978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:17:15.072997 systemd-journald[178]: Journal started Jan 17 12:17:15.073037 systemd-journald[178]: Runtime Journal (/run/log/journal/ec27d2a42ed4a86493532360426a3783) is 4.8M, max 38.6M, 33.7M free. Jan 17 12:17:15.024556 systemd-modules-load[179]: Inserted module 'overlay' Jan 17 12:17:15.197456 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:17:15.197506 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 17 12:17:15.197531 kernel: Bridge firewalling registered Jan 17 12:17:15.080530 systemd-modules-load[179]: Inserted module 'br_netfilter' Jan 17 12:17:15.193353 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:17:15.193889 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:17:15.200891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:17:15.206838 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:17:15.211958 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:17:15.214740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:15.232417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:17:15.245891 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:17:15.250235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:17:15.253945 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:17:15.271888 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:17:15.288036 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:17:15.295889 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:17:15.323722 dracut-cmdline[214]: dracut-dracut-053 Jan 17 12:17:15.328410 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:17:15.345109 systemd-resolved[204]: Positive Trust Anchors: Jan 17 12:17:15.345134 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:17:15.345192 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:17:15.365543 systemd-resolved[204]: Defaulting to hostname 'linux'. Jan 17 12:17:15.368812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:17:15.371623 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:17:15.431717 kernel: SCSI subsystem initialized Jan 17 12:17:15.442726 kernel: Loading iSCSI transport class v2.0-870. 
Jan 17 12:17:15.454719 kernel: iscsi: registered transport (tcp) Jan 17 12:17:15.477748 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:17:15.477827 kernel: QLogic iSCSI HBA Driver Jan 17 12:17:15.539831 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:17:15.559295 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:17:15.613718 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:17:15.613795 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:17:15.613816 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:17:15.661745 kernel: raid6: avx512x4 gen() 14438 MB/s Jan 17 12:17:15.678718 kernel: raid6: avx512x2 gen() 14595 MB/s Jan 17 12:17:15.695835 kernel: raid6: avx512x1 gen() 13042 MB/s Jan 17 12:17:15.712725 kernel: raid6: avx2x4 gen() 11904 MB/s Jan 17 12:17:15.729812 kernel: raid6: avx2x2 gen() 12870 MB/s Jan 17 12:17:15.748144 kernel: raid6: avx2x1 gen() 11500 MB/s Jan 17 12:17:15.748218 kernel: raid6: using algorithm avx512x2 gen() 14595 MB/s Jan 17 12:17:15.766727 kernel: raid6: .... xor() 20824 MB/s, rmw enabled Jan 17 12:17:15.766800 kernel: raid6: using avx512x2 recovery algorithm Jan 17 12:17:15.810751 kernel: xor: automatically using best checksumming function avx Jan 17 12:17:16.149301 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:17:16.173382 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:17:16.190997 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:17:16.225356 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 17 12:17:16.243474 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:17:16.262336 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:17:16.288812 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Jan 17 12:17:16.323319 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:17:16.332237 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:17:16.394514 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:17:16.406351 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:17:16.443955 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:17:16.448003 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:17:16.449727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:17:16.451683 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:17:16.464853 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:17:16.516472 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:17:16.538813 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:17:16.562398 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 17 12:17:16.586973 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 17 12:17:16.587174 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 17 12:17:16.587337 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 17 12:17:16.587358 kernel: AES CTR mode by8 optimization enabled Jan 17 12:17:16.587378 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:63:b2:5d:96:5d Jan 17 12:17:16.568475 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:17:16.568750 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:17:16.571193 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:17:16.572464 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:17:16.572815 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:16.577066 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:17:16.589528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:17:16.600164 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:17:16.638231 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 17 12:17:16.638501 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 17 12:17:16.650724 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 17 12:17:16.655721 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:17:16.655789 kernel: GPT:9289727 != 16777215 Jan 17 12:17:16.655811 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:17:16.655829 kernel: GPT:9289727 != 16777215 Jan 17 12:17:16.655846 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:17:16.655865 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:17:16.799723 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (454) Jan 17 12:17:16.810606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:16.813753 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (461) Jan 17 12:17:16.838921 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:17:16.891432 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 17 12:17:16.926519 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 17 12:17:16.930037 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:17:16.947203 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 12:17:16.960108 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 17 12:17:16.961843 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 17 12:17:16.975497 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:17:16.997120 disk-uuid[630]: Primary Header is updated. Jan 17 12:17:16.997120 disk-uuid[630]: Secondary Entries is updated. Jan 17 12:17:16.997120 disk-uuid[630]: Secondary Header is updated. 
Jan 17 12:17:17.004721 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:17:17.010733 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:17:17.017718 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:17:18.025266 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:17:18.029860 disk-uuid[631]: The operation has completed successfully. Jan 17 12:17:18.256222 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:17:18.256366 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:17:18.286987 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:17:18.304154 sh[972]: Success Jan 17 12:17:18.321271 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:17:18.463939 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:17:18.478917 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:17:18.493069 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:17:18.526539 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:17:18.526606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:17:18.526634 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:17:18.526653 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:17:18.527209 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:17:18.631730 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:17:18.633581 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:17:18.634378 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:17:18.650043 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:17:18.652877 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:17:18.683735 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:17:18.683802 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:17:18.683821 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:17:18.691347 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:17:18.706757 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:17:18.707615 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:17:18.717103 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:17:18.728948 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:17:18.784338 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:17:18.793033 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:17:18.848814 systemd-networkd[1164]: lo: Link UP Jan 17 12:17:18.848825 systemd-networkd[1164]: lo: Gained carrier Jan 17 12:17:18.851272 systemd-networkd[1164]: Enumeration completed Jan 17 12:17:18.851766 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:17:18.851771 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:17:18.853137 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:17:18.856178 systemd[1]: Reached target network.target - Network. Jan 17 12:17:18.863856 systemd-networkd[1164]: eth0: Link UP Jan 17 12:17:18.863863 systemd-networkd[1164]: eth0: Gained carrier Jan 17 12:17:18.863877 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:17:18.886782 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.23.9/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:17:19.070248 ignition[1095]: Ignition 2.19.0 Jan 17 12:17:19.070263 ignition[1095]: Stage: fetch-offline Jan 17 12:17:19.070524 ignition[1095]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:19.070537 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:17:19.071042 ignition[1095]: Ignition finished successfully Jan 17 12:17:19.076385 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:17:19.087940 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:17:19.111936 ignition[1173]: Ignition 2.19.0 Jan 17 12:17:19.111950 ignition[1173]: Stage: fetch Jan 17 12:17:19.112582 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:19.112595 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:17:19.112738 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:17:19.148212 ignition[1173]: PUT result: OK Jan 17 12:17:19.153352 ignition[1173]: parsed url from cmdline: "" Jan 17 12:17:19.153363 ignition[1173]: no config URL provided Jan 17 12:17:19.153374 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:17:19.153389 ignition[1173]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:17:19.153415 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:17:19.161928 ignition[1173]: PUT result: OK Jan 17 12:17:19.162016 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 17 12:17:19.165600 ignition[1173]: GET result: OK Jan 17 12:17:19.166160 ignition[1173]: parsing config with SHA512: 4d683ff0906587c1fa23d205ce37b4ce57a1ccd287dca22b7e32090f462fad4a98fcf0325d5c0af533e4a5553202609793076a4a25ef75e5e72d2f8fd10750de Jan 17 12:17:19.171135 unknown[1173]: fetched base config from "system" Jan 17 12:17:19.171156 unknown[1173]: fetched base config from "system" Jan 17 12:17:19.172940 ignition[1173]: fetch: fetch complete Jan 17 12:17:19.171166 unknown[1173]: fetched user config from "aws" Jan 17 12:17:19.172949 ignition[1173]: fetch: fetch passed Jan 17 12:17:19.176212 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:17:19.173023 ignition[1173]: Ignition finished successfully Jan 17 12:17:19.194907 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 17 12:17:19.214263 ignition[1179]: Ignition 2.19.0 Jan 17 12:17:19.214276 ignition[1179]: Stage: kargs Jan 17 12:17:19.214883 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:19.214897 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:17:19.215003 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:17:19.218069 ignition[1179]: PUT result: OK Jan 17 12:17:19.224124 ignition[1179]: kargs: kargs passed Jan 17 12:17:19.224384 ignition[1179]: Ignition finished successfully Jan 17 12:17:19.227137 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:17:19.235343 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:17:19.260024 ignition[1185]: Ignition 2.19.0 Jan 17 12:17:19.260037 ignition[1185]: Stage: disks Jan 17 12:17:19.261804 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:19.261819 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:17:19.262070 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:17:19.265626 ignition[1185]: PUT result: OK Jan 17 12:17:19.282664 ignition[1185]: disks: disks passed Jan 17 12:17:19.282763 ignition[1185]: Ignition finished successfully Jan 17 12:17:19.285968 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:17:19.288043 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:17:19.296587 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:17:19.298957 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:17:19.301794 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:17:19.304260 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:17:19.313908 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:17:19.351159 systemd-fsck[1193]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:17:19.355261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:17:19.365831 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:17:19.500087 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:17:19.501206 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:17:19.504624 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:17:19.521939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:17:19.529002 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:17:19.530996 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:17:19.531060 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:17:19.531102 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:17:19.544529 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:17:19.548520 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 12:17:19.559934 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1212) Jan 17 12:17:19.567067 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:17:19.567200 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:17:19.567555 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:17:19.587753 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:17:19.590395 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:17:19.872733 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:17:19.895906 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:17:19.908759 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:17:19.920743 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:17:20.197220 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:17:20.208835 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:17:20.212899 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:17:20.225900 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:17:20.227151 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:17:20.261877 ignition[1324]: INFO : Ignition 2.19.0 Jan 17 12:17:20.261877 ignition[1324]: INFO : Stage: mount Jan 17 12:17:20.266077 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:20.266077 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:17:20.266077 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:17:20.266077 ignition[1324]: INFO : PUT result: OK Jan 17 12:17:20.274634 ignition[1324]: INFO : mount: mount passed Jan 17 12:17:20.276321 ignition[1324]: INFO : Ignition finished successfully Jan 17 12:17:20.280656 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:17:20.292099 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:17:20.299211 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:17:20.310082 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:17:20.337018 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1337) Jan 17 12:17:20.337090 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:17:20.339133 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:17:20.339190 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:17:20.357766 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:17:20.360721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:17:20.392649 ignition[1354]: INFO : Ignition 2.19.0 Jan 17 12:17:20.392649 ignition[1354]: INFO : Stage: files Jan 17 12:17:20.394855 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:20.394855 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:17:20.394855 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:17:20.399584 ignition[1354]: INFO : PUT result: OK Jan 17 12:17:20.403585 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:17:20.404937 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:17:20.404937 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:17:20.409798 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:17:20.411733 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:17:20.413626 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:17:20.411870 unknown[1354]: wrote ssh authorized keys file for user: core Jan 17 12:17:20.421074 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:17:20.423222 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:17:20.423222 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:17:20.423222 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:17:20.431903 systemd-networkd[1164]: eth0: Gained IPv6LL Jan 17 12:17:20.537963 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:17:20.695469 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:17:20.698016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:17:21.200080 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:17:21.493583 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:17:21.493583 ignition[1354]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:17:21.499555 ignition[1354]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:17:21.503785 ignition[1354]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:17:21.503785 ignition[1354]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:17:21.503785 ignition[1354]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 12:17:21.512163 ignition[1354]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:17:21.512163 ignition[1354]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:17:21.512163 ignition[1354]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:17:21.512163 ignition[1354]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:17:21.512163 ignition[1354]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:17:21.512163 ignition[1354]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:17:21.512163 ignition[1354]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:17:21.512163 ignition[1354]: INFO : files: files passed Jan 17 12:17:21.512163 ignition[1354]: INFO : Ignition finished successfully Jan 17 12:17:21.512242 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:17:21.528080 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:17:21.540828 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 17 12:17:21.545117 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:17:21.546547 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:17:21.562609 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:17:21.562609 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:17:21.574135 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:17:21.578376 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:17:21.578824 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:17:21.587920 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:17:21.667379 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:17:21.667556 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:17:21.669327 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:17:21.671281 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:17:21.672916 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:17:21.682838 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:17:21.718283 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:17:21.726037 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:17:21.747660 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:17:21.747919 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:17:21.752832 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:17:21.754188 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:17:21.754472 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:17:21.758184 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:17:21.763959 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:17:21.765730 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:17:21.769705 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:17:21.771143 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:17:21.775269 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:17:21.776808 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:17:21.783229 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:17:21.789562 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:17:21.792405 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:17:21.792554 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:17:21.792679 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:17:21.798028 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:17:21.800821 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 17 12:17:21.803309 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:17:21.803517 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:17:21.807432 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:17:21.808648 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:17:21.811210 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:17:21.812734 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:17:21.816197 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:17:21.818597 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:17:21.830211 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:17:21.850644 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:17:21.854884 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:17:21.858674 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:17:21.863856 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:17:21.865791 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:17:21.873106 ignition[1406]: INFO : Ignition 2.19.0 Jan 17 12:17:21.873106 ignition[1406]: INFO : Stage: umount Jan 17 12:17:21.873106 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:17:21.873106 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:17:21.873106 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:17:21.873106 ignition[1406]: INFO : PUT result: OK Jan 17 12:17:21.884713 ignition[1406]: INFO : umount: umount passed Jan 17 12:17:21.884713 ignition[1406]: INFO : Ignition finished successfully Jan 17 12:17:21.889251 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:17:21.889453 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:17:21.895840 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:17:21.895962 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:17:21.906006 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:17:21.908466 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:17:21.908617 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:17:21.914054 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:17:21.914134 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:17:21.916598 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:17:21.916672 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:17:21.919615 systemd[1]: Stopped target network.target - Network. Jan 17 12:17:21.921840 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:17:21.921924 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:17:21.925638 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:17:21.927651 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:17:21.931629 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 17 12:17:21.938060 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:17:21.939141 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:17:21.942531 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:17:21.942589 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:17:21.946177 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:17:21.946239 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:17:21.946328 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:17:21.946370 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:17:21.955707 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:17:21.955793 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:17:21.965127 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:17:21.968749 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:17:21.978357 systemd-networkd[1164]: eth0: DHCPv6 lease lost Jan 17 12:17:21.984414 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:17:21.984542 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:17:21.994007 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:17:21.994161 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:17:21.999994 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:17:22.000369 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:17:22.003193 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:17:22.003355 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:17:22.006349 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:17:22.006426 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:17:22.021837 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:17:22.023059 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:17:22.023149 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:17:22.024780 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:17:22.024843 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:17:22.027615 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:17:22.027679 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:17:22.031656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:17:22.031729 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:17:22.031893 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:17:22.052418 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:17:22.052543 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:17:22.054449 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:17:22.055993 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:17:22.063110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:17:22.063184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 17 12:17:22.066023 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:17:22.066178 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:17:22.068208 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:17:22.068268 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:17:22.072921 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:17:22.074226 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:17:22.076827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:17:22.076893 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:17:22.087913 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:17:22.089292 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:17:22.090850 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:17:22.091158 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:17:22.091220 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:17:22.094972 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:17:22.095026 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:17:22.096789 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:17:22.096839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:22.121993 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:17:22.122164 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:17:22.127413 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:17:22.134870 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:17:22.157143 systemd[1]: Switching root. Jan 17 12:17:22.191326 systemd-journald[178]: Journal stopped Jan 17 12:17:24.759737 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 17 12:17:24.759836 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:17:24.759859 kernel: SELinux: policy capability open_perms=1 Jan 17 12:17:24.759876 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:17:24.759894 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:17:24.759912 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:17:24.759929 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:17:24.759956 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:17:24.759973 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:17:24.759994 kernel: audit: type=1403 audit(1737116242.970:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:17:24.760242 systemd[1]: Successfully loaded SELinux policy in 63.650ms. Jan 17 12:17:24.760283 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.305ms. 
Jan 17 12:17:24.760305 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:17:24.760325 systemd[1]: Detected virtualization amazon. Jan 17 12:17:24.760345 systemd[1]: Detected architecture x86-64. Jan 17 12:17:24.760363 systemd[1]: Detected first boot. Jan 17 12:17:24.760382 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:17:24.760401 zram_generator::config[1466]: No configuration found. Jan 17 12:17:24.760460 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:17:24.760483 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:17:24.760504 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 12:17:24.760526 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:17:24.760546 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:17:24.760564 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:17:24.760587 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:17:24.760607 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:17:24.760628 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:17:24.760652 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:17:24.760673 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:17:24.760712 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:17:24.760744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:17:24.760764 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:17:24.760784 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:17:24.760805 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:17:24.760831 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:17:24.760851 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:17:24.760869 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:17:24.760888 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:17:24.760907 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:17:24.760930 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:17:24.760949 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:17:24.760969 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:17:24.760988 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:17:24.761010 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:17:24.761031 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 17 12:17:24.761107 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:17:24.761128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:17:24.761148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:17:24.766777 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:17:24.766870 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:17:24.766916 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:17:24.766936 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:17:24.766986 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:17:24.767007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:24.767025 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:17:24.767068 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:17:24.767088 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:17:24.767107 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:17:24.767149 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:17:24.767170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:17:24.767189 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:17:24.767322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:17:24.767345 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:17:24.767365 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:17:24.767419 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:17:24.767445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:17:24.767491 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:17:24.767512 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:17:24.767532 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:17:24.767583 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:17:24.767607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:17:24.767627 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:17:24.767672 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:17:24.767705 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:17:24.767837 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:24.767864 kernel: loop: module loaded Jan 17 12:17:24.767884 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 17 12:17:24.767937 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:17:24.767964 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:17:24.767983 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:17:24.768027 kernel: fuse: init (API version 7.39) Jan 17 12:17:24.768105 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:17:24.768126 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:17:24.768145 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:17:24.768193 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:17:24.768212 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:17:24.768236 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:17:24.768257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:17:24.768279 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:17:24.768300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:17:24.768369 systemd-journald[1570]: Collecting audit messages is disabled. Jan 17 12:17:24.768449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:17:24.768473 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:17:24.768493 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:17:24.768513 systemd-journald[1570]: Journal started Jan 17 12:17:24.768654 systemd-journald[1570]: Runtime Journal (/run/log/journal/ec27d2a42ed4a86493532360426a3783) is 4.8M, max 38.6M, 33.7M free. Jan 17 12:17:24.775816 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:17:24.790205 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:17:24.790478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:17:24.792685 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:17:24.796064 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:17:24.798140 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:17:24.830590 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:17:24.863726 kernel: ACPI: bus type drm_connector registered Jan 17 12:17:24.866822 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:17:24.874946 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:17:24.876455 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:17:24.879873 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:17:24.895896 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:17:24.897517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:17:24.901011 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:17:24.902499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 12:17:24.906046 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:17:24.921364 systemd-journald[1570]: Time spent on flushing to /var/log/journal/ec27d2a42ed4a86493532360426a3783 is 60.620ms for 944 entries. Jan 17 12:17:24.921364 systemd-journald[1570]: System Journal (/var/log/journal/ec27d2a42ed4a86493532360426a3783) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:17:24.990583 systemd-journald[1570]: Received client request to flush runtime journal. Jan 17 12:17:24.926896 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:17:24.930627 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:17:24.936905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:17:24.939124 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:17:24.941013 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:17:24.942626 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:17:24.965050 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:17:24.978108 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:17:24.980200 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:17:25.003405 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:17:25.015113 udevadm[1620]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:17:25.043678 systemd-tmpfiles[1614]: ACLs are not supported, ignoring. Jan 17 12:17:25.043720 systemd-tmpfiles[1614]: ACLs are not supported, ignoring. Jan 17 12:17:25.063793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:17:25.070009 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:17:25.088076 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:17:25.144950 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:17:25.158319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:17:25.190606 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Jan 17 12:17:25.190636 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Jan 17 12:17:25.204182 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:17:26.051745 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:17:26.060096 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:17:26.099499 systemd-udevd[1643]: Using default interface naming scheme 'v255'. Jan 17 12:17:26.193863 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:17:26.206379 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:17:26.255490 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:17:26.300076 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:17:26.306858 (udev-worker)[1651]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 12:17:26.362675 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:17:26.450722 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 12:17:26.450795 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 17 12:17:26.461717 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 12:17:26.474803 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:17:26.483740 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 17 12:17:26.499818 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 12:17:26.509014 systemd-networkd[1647]: lo: Link UP Jan 17 12:17:26.509024 systemd-networkd[1647]: lo: Gained carrier Jan 17 12:17:26.511661 systemd-networkd[1647]: Enumeration completed Jan 17 12:17:26.516473 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:17:26.518945 systemd-networkd[1647]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:17:26.519016 systemd-networkd[1647]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:17:26.524181 systemd-networkd[1647]: eth0: Link UP Jan 17 12:17:26.524927 systemd-networkd[1647]: eth0: Gained carrier Jan 17 12:17:26.525370 systemd-networkd[1647]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:17:26.532962 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:17:26.542780 systemd-networkd[1647]: eth0: DHCPv4 address 172.31.23.9/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:17:26.557715 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:17:26.574751 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1644) Jan 17 12:17:26.591256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:17:26.763871 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:17:26.792071 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 12:17:26.942319 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:17:26.944641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:17:26.963263 lvm[1765]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:17:26.998335 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:17:27.000188 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:17:27.009886 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:17:27.015303 lvm[1770]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:17:27.044165 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:17:27.046196 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:17:27.047772 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 17 12:17:27.047799 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:17:27.049118 systemd[1]: Reached target machines.target - Containers. Jan 17 12:17:27.051498 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:17:27.057014 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:17:27.070903 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:17:27.075327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:17:27.087673 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:17:27.106400 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:17:27.122089 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:17:27.135815 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:17:27.146190 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:17:27.168869 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:17:27.170302 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:17:27.176748 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 12:17:27.270717 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:17:27.303893 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 12:17:27.404723 kernel: loop2: detected capacity change from 0 to 211296 Jan 17 12:17:27.520723 kernel: loop3: detected capacity change from 0 to 61336 Jan 17 12:17:27.624770 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 12:17:27.664900 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 12:17:27.707632 kernel: loop6: detected capacity change from 0 to 211296 Jan 17 12:17:27.739718 kernel: loop7: detected capacity change from 0 to 61336 Jan 17 12:17:27.758497 (sd-merge)[1794]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 12:17:27.763248 (sd-merge)[1794]: Merged extensions into '/usr'. Jan 17 12:17:27.768665 systemd[1]: Reloading requested from client PID 1778 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:17:27.768685 systemd[1]: Reloading... Jan 17 12:17:27.875719 zram_generator::config[1825]: No configuration found. Jan 17 12:17:28.098426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:17:28.234341 systemd[1]: Reloading finished in 464 ms. Jan 17 12:17:28.239980 systemd-networkd[1647]: eth0: Gained IPv6LL Jan 17 12:17:28.257480 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:17:28.260869 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:17:28.278009 systemd[1]: Starting ensure-sysext.service... Jan 17 12:17:28.289041 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:17:28.301895 systemd[1]: Reloading requested from client PID 1878 ('systemctl') (unit ensure-sysext.service)... 
Jan 17 12:17:28.301917 systemd[1]: Reloading... Jan 17 12:17:28.325449 systemd-tmpfiles[1879]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:17:28.326263 systemd-tmpfiles[1879]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:17:28.328441 systemd-tmpfiles[1879]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:17:28.329062 systemd-tmpfiles[1879]: ACLs are not supported, ignoring. Jan 17 12:17:28.329251 systemd-tmpfiles[1879]: ACLs are not supported, ignoring. Jan 17 12:17:28.334976 systemd-tmpfiles[1879]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:17:28.334990 systemd-tmpfiles[1879]: Skipping /boot Jan 17 12:17:28.351124 systemd-tmpfiles[1879]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:17:28.351253 systemd-tmpfiles[1879]: Skipping /boot Jan 17 12:17:28.472720 zram_generator::config[1909]: No configuration found. Jan 17 12:17:28.647640 ldconfig[1774]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:17:28.682018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:17:28.783896 systemd[1]: Reloading finished in 481 ms. Jan 17 12:17:28.798753 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:17:28.810311 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:17:28.823928 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:17:28.833026 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:17:28.838873 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:17:28.863880 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:17:28.879556 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:17:28.901863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:28.902166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:17:28.910763 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:17:28.931279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:17:28.953224 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:17:28.957958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:17:28.958255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:28.962208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:17:28.968990 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:17:28.985164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 12:17:28.985954 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:17:29.000762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:17:29.021852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:17:29.022070 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:29.028175 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:17:29.034355 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:17:29.039231 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:17:29.039543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:17:29.043569 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:17:29.043812 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:17:29.047292 augenrules[1997]: No rules Jan 17 12:17:29.049381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:17:29.051221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:17:29.059671 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:17:29.076574 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:29.077843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:17:29.085006 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:17:29.097024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:17:29.103025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:17:29.117880 systemd-resolved[1972]: Positive Trust Anchors: Jan 17 12:17:29.118418 systemd-resolved[1972]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:17:29.118486 systemd-resolved[1972]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:17:29.128102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:17:29.129685 systemd-resolved[1972]: Defaulting to hostname 'linux'. Jan 17 12:17:29.131775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:17:29.132174 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:17:29.145061 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 17 12:17:29.146748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:17:29.148667 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:17:29.152064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:17:29.155513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:17:29.158906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:17:29.160931 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:17:29.161183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:17:29.162994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:17:29.163157 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:17:29.165262 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:17:29.165470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:17:29.178269 systemd[1]: Finished ensure-sysext.service. Jan 17 12:17:29.193137 systemd[1]: Reached target network.target - Network. Jan 17 12:17:29.195429 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:17:29.197441 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:17:29.199119 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:17:29.199265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:17:29.199300 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:17:29.199963 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:17:29.206800 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:17:29.209393 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:17:29.212039 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:17:29.213913 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:17:29.215324 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:17:29.217081 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:17:29.218730 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:17:29.218765 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:17:29.219979 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:17:29.223261 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:17:29.229631 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:17:29.233165 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:17:29.236283 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 17 12:17:29.237939 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:17:29.239087 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:17:29.240518 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:17:29.240559 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:17:29.240584 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:17:29.249779 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:17:29.267032 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:17:29.278209 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:17:29.281844 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:17:29.290050 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:17:29.292814 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:17:29.299863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:29.311232 jq[2039]: false Jan 17 12:17:29.330804 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:17:29.353991 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:17:29.361884 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:17:29.376842 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:17:29.390159 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 12:17:29.402982 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:17:29.431591 dbus-daemon[2038]: [system] SELinux support is enabled Jan 17 12:17:29.431911 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:17:29.437712 extend-filesystems[2040]: Found loop4 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found loop5 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found loop6 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found loop7 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found nvme0n1 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found nvme0n1p1 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found nvme0n1p2 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found nvme0n1p3 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found usr Jan 17 12:17:29.437712 extend-filesystems[2040]: Found nvme0n1p4 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found nvme0n1p6 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found nvme0n1p7 Jan 17 12:17:29.437712 extend-filesystems[2040]: Found nvme0n1p9 Jan 17 12:17:29.437712 extend-filesystems[2040]: Checking size of /dev/nvme0n1p9 Jan 17 12:17:29.440924 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:17:29.449035 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:17:29.465055 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 17 12:17:29.467864 dbus-daemon[2038]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1647 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:17:29.484018 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:17:29.494069 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:17:29.500242 ntpd[2043]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting Jan 17 12:17:29.507926 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting Jan 17 12:17:29.507926 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:17:29.507926 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: ---------------------------------------------------- Jan 17 12:17:29.507926 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:17:29.507926 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:17:29.507926 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: corporation. Support and training for ntp-4 are Jan 17 12:17:29.507926 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: available at https://www.nwtime.org/support Jan 17 12:17:29.507926 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: ---------------------------------------------------- Jan 17 12:17:29.507926 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: proto: precision = 0.068 usec (-24) Jan 17 12:17:29.500269 ntpd[2043]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:17:29.521140 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: basedate set to 2025-01-05 Jan 17 12:17:29.521140 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: gps base set to 2025-01-05 (week 2348) Jan 17 12:17:29.521140 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:17:29.521140 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:17:29.514221 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:17:29.500279 ntpd[2043]: ---------------------------------------------------- Jan 17 12:17:29.514604 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:17:29.500288 ntpd[2043]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:17:29.500297 ntpd[2043]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:17:29.500307 ntpd[2043]: corporation. 
Support and training for ntp-4 are Jan 17 12:17:29.500317 ntpd[2043]: available at https://www.nwtime.org/support Jan 17 12:17:29.540863 extend-filesystems[2040]: Resized partition /dev/nvme0n1p9 Jan 17 12:17:29.545197 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:17:29.545197 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: Listen normally on 3 eth0 172.31.23.9:123 Jan 17 12:17:29.545197 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: Listen normally on 4 lo [::1]:123 Jan 17 12:17:29.545197 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: Listen normally on 5 eth0 [fe80::463:b2ff:fe5d:965d%2]:123 Jan 17 12:17:29.545197 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: Listening on routing socket on fd #22 for interface updates Jan 17 12:17:29.545197 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:17:29.545197 ntpd[2043]: 17 Jan 12:17:29 ntpd[2043]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:17:29.500327 ntpd[2043]: ---------------------------------------------------- Jan 17 12:17:29.545574 extend-filesystems[2081]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:17:29.575914 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 17 12:17:29.507588 ntpd[2043]: proto: precision = 0.068 usec (-24) Jan 17 12:17:29.550211 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:17:29.510069 ntpd[2043]: basedate set to 2025-01-05 Jan 17 12:17:29.550545 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:17:29.510091 ntpd[2043]: gps base set to 2025-01-05 (week 2348) Jan 17 12:17:29.519452 ntpd[2043]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:17:29.519508 ntpd[2043]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:17:29.521792 ntpd[2043]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:17:29.521848 ntpd[2043]: Listen normally on 3 eth0 172.31.23.9:123 Jan 17 12:17:29.521892 ntpd[2043]: Listen normally on 4 lo [::1]:123 Jan 17 12:17:29.521941 ntpd[2043]: Listen normally on 5 eth0 [fe80::463:b2ff:fe5d:965d%2]:123 Jan 17 12:17:29.521982 ntpd[2043]: Listening on routing socket on fd #22 for interface updates Jan 17 12:17:29.533927 ntpd[2043]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:17:29.533962 ntpd[2043]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:17:29.578332 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:17:29.587066 jq[2068]: true Jan 17 12:17:29.578988 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:17:29.650144 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
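As context for the resize entries above: resize2fs grows /dev/nvme0n1p9 from 553472 to 1489915 blocks, and the extend-filesystems output labels them 4k blocks. A quick Python check of that arithmetic, using only the numbers reported in the log:

old_blocks, new_blocks = 553_472, 1_489_915   # block counts reported by EXT4-fs/resize2fs above
block_size = 4096                             # "(4k) blocks" per the extend-filesystems output
gib = 1024 ** 3
print(f"before: {old_blocks * block_size / gib:.2f} GiB")   # ~2.11 GiB
print(f"after:  {new_blocks * block_size / gib:.2f} GiB")   # ~5.68 GiB

In other words the online resize roughly triples the root filesystem to about 5.7 GiB, matching the "resized filesystem to 1489915" kernel message that appears a few entries later.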
Jan 17 12:17:29.667128 update_engine[2061]: I20250117 12:17:29.665227 2061 main.cc:92] Flatcar Update Engine starting Jan 17 12:17:29.686739 coreos-metadata[2036]: Jan 17 12:17:29.673 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:17:29.686739 coreos-metadata[2036]: Jan 17 12:17:29.677 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 12:17:29.702292 coreos-metadata[2036]: Jan 17 12:17:29.693 INFO Fetch successful Jan 17 12:17:29.702292 coreos-metadata[2036]: Jan 17 12:17:29.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 12:17:29.733622 coreos-metadata[2036]: Jan 17 12:17:29.715 INFO Fetch successful Jan 17 12:17:29.733622 coreos-metadata[2036]: Jan 17 12:17:29.715 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 12:17:29.731585 dbus-daemon[2038]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:17:29.745651 coreos-metadata[2036]: Jan 17 12:17:29.740 INFO Fetch successful Jan 17 12:17:29.745651 coreos-metadata[2036]: Jan 17 12:17:29.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 12:17:29.745269 (ntainerd)[2097]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:17:29.757893 coreos-metadata[2036]: Jan 17 12:17:29.749 INFO Fetch successful Jan 17 12:17:29.761116 coreos-metadata[2036]: Jan 17 12:17:29.758 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 12:17:29.766802 update_engine[2061]: I20250117 12:17:29.764662 2061 update_check_scheduler.cc:74] Next update check in 10m6s Jan 17 12:17:29.773770 tar[2076]: linux-amd64/helm Jan 17 12:17:29.776929 coreos-metadata[2036]: Jan 17 12:17:29.772 INFO Fetch failed with 404: resource not found Jan 17 12:17:29.776929 coreos-metadata[2036]: Jan 17 12:17:29.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 12:17:29.775881 systemd[1]: Started update-engine.service - Update Engine. 
Jan 17 12:17:29.779487 jq[2086]: true Jan 17 12:17:29.785717 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 17 12:17:29.785851 coreos-metadata[2036]: Jan 17 12:17:29.784 INFO Fetch successful Jan 17 12:17:29.793027 coreos-metadata[2036]: Jan 17 12:17:29.790 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 12:17:29.793027 coreos-metadata[2036]: Jan 17 12:17:29.790 INFO Fetch successful Jan 17 12:17:29.793027 coreos-metadata[2036]: Jan 17 12:17:29.791 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 12:17:29.793027 coreos-metadata[2036]: Jan 17 12:17:29.792 INFO Fetch successful Jan 17 12:17:29.793027 coreos-metadata[2036]: Jan 17 12:17:29.792 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 12:17:29.793027 coreos-metadata[2036]: Jan 17 12:17:29.792 INFO Fetch successful Jan 17 12:17:29.793027 coreos-metadata[2036]: Jan 17 12:17:29.793 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 12:17:29.810456 coreos-metadata[2036]: Jan 17 12:17:29.793 INFO Fetch successful Jan 17 12:17:29.819751 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:17:29.819819 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:17:29.830080 extend-filesystems[2081]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 12:17:29.830080 extend-filesystems[2081]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:17:29.830080 extend-filesystems[2081]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 17 12:17:29.835776 extend-filesystems[2040]: Resized filesystem in /dev/nvme0n1p9 Jan 17 12:17:29.859946 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:17:29.862912 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:17:29.862946 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:17:29.865591 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:17:29.879024 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:17:29.889501 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:17:29.889873 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:17:29.892131 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:17:29.926781 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 12:17:29.945995 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:17:29.948131 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
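As context for the coreos-metadata entries above: the agent follows the EC2 IMDSv2 flow, first PUTting /latest/api/token to obtain a short-lived session token and then GETting each versioned 2021-01-03/meta-data/ path with that token attached. A minimal Python sketch of the same sequence, assuming the standard IMDSv2 token headers and reproducing only the paths visible in the log:

import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: request a session token, as in "Putting .../latest/api/token: Attempt #1".
tok_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(tok_req).read().decode()

# Step 2: walk the same meta-data paths the agent fetches above.
for path in ("instance-id", "instance-type", "local-ipv4", "public-ipv4", "hostname"):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(path, urllib.request.urlopen(req).read().decode())

Paths that do not exist for the instance (the ipv6 entry above) come back as HTTP 404, which is why the agent logs "Fetch failed with 404: resource not found" and simply continues.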
Jan 17 12:17:30.030913 systemd-logind[2059]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 12:17:30.031339 systemd-logind[2059]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 17 12:17:30.031365 systemd-logind[2059]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:17:30.031969 systemd-logind[2059]: New seat seat0. Jan 17 12:17:30.033084 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:17:30.110085 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2144) Jan 17 12:17:30.114565 bash[2154]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:17:30.116182 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:17:30.130183 systemd[1]: Starting sshkeys.service... Jan 17 12:17:30.214831 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:17:30.227347 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:17:30.427754 amazon-ssm-agent[2134]: Initializing new seelog logger Jan 17 12:17:30.427754 amazon-ssm-agent[2134]: New Seelog Logger Creation Complete Jan 17 12:17:30.427754 amazon-ssm-agent[2134]: 2025/01/17 12:17:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:17:30.427754 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:17:30.427754 amazon-ssm-agent[2134]: 2025/01/17 12:17:30 processing appconfig overrides Jan 17 12:17:30.432083 amazon-ssm-agent[2134]: 2025/01/17 12:17:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:17:30.432083 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:17:30.432327 amazon-ssm-agent[2134]: 2025/01/17 12:17:30 processing appconfig overrides Jan 17 12:17:30.432650 amazon-ssm-agent[2134]: 2025/01/17 12:17:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:17:30.432650 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:17:30.432760 amazon-ssm-agent[2134]: 2025/01/17 12:17:30 processing appconfig overrides Jan 17 12:17:30.445022 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO Proxy environment variables: Jan 17 12:17:30.456230 coreos-metadata[2169]: Jan 17 12:17:30.454 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:17:30.460199 coreos-metadata[2169]: Jan 17 12:17:30.456 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 12:17:30.460199 coreos-metadata[2169]: Jan 17 12:17:30.459 INFO Fetch successful Jan 17 12:17:30.460199 coreos-metadata[2169]: Jan 17 12:17:30.459 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 12:17:30.460403 amazon-ssm-agent[2134]: 2025/01/17 12:17:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:17:30.460403 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 17 12:17:30.460403 amazon-ssm-agent[2134]: 2025/01/17 12:17:30 processing appconfig overrides Jan 17 12:17:30.463119 coreos-metadata[2169]: Jan 17 12:17:30.463 INFO Fetch successful Jan 17 12:17:30.469791 unknown[2169]: wrote ssh authorized keys file for user: core Jan 17 12:17:30.475588 sshd_keygen[2073]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:17:30.512671 dbus-daemon[2038]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:17:30.512896 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:17:30.517524 update-ssh-keys[2210]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:17:30.522960 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:17:30.538487 systemd[1]: Finished sshkeys.service. Jan 17 12:17:30.554779 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO https_proxy: Jan 17 12:17:30.555165 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:17:30.558127 dbus-daemon[2038]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2122 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:17:30.584187 locksmithd[2125]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:17:30.585166 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:17:30.601930 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:17:30.645086 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:17:30.645437 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:17:30.660929 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:17:30.667852 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO http_proxy: Jan 17 12:17:30.692797 polkitd[2230]: Started polkitd version 121 Jan 17 12:17:30.744123 polkitd[2230]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:17:30.744237 polkitd[2230]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:17:30.752317 polkitd[2230]: Finished loading, compiling and executing 2 rules Jan 17 12:17:30.754477 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:17:30.753568 dbus-daemon[2038]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:17:30.754498 polkitd[2230]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:17:30.785105 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:17:30.801974 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO no_proxy: Jan 17 12:17:30.804981 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:17:30.836331 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:17:30.838107 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:17:30.853879 systemd-hostnamed[2122]: Hostname set to (transient) Jan 17 12:17:30.854023 systemd-resolved[1972]: System hostname changed to 'ip-172-31-23-9'. 
Jan 17 12:17:30.899717 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO Checking if agent identity type OnPrem can be assumed Jan 17 12:17:30.996391 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO Checking if agent identity type EC2 can be assumed Jan 17 12:17:31.094827 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO Agent will take identity from EC2 Jan 17 12:17:31.109721 containerd[2097]: time="2025-01-17T12:17:31.107128052Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:17:31.196765 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:17:31.211097 containerd[2097]: time="2025-01-17T12:17:31.211014764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:31.213557 containerd[2097]: time="2025-01-17T12:17:31.213505001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:31.213715 containerd[2097]: time="2025-01-17T12:17:31.213684805Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:17:31.213798 containerd[2097]: time="2025-01-17T12:17:31.213784015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:17:31.214045 containerd[2097]: time="2025-01-17T12:17:31.214026606Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:17:31.215470 containerd[2097]: time="2025-01-17T12:17:31.215295205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:31.217556 containerd[2097]: time="2025-01-17T12:17:31.217512186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:31.217708 containerd[2097]: time="2025-01-17T12:17:31.217668133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:31.218397 containerd[2097]: time="2025-01-17T12:17:31.218367531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:31.218508 containerd[2097]: time="2025-01-17T12:17:31.218491476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:31.218705 containerd[2097]: time="2025-01-17T12:17:31.218666621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:31.218793 containerd[2097]: time="2025-01-17T12:17:31.218777787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:31.219034 containerd[2097]: time="2025-01-17T12:17:31.219000588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:17:31.219959 containerd[2097]: time="2025-01-17T12:17:31.219934343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:31.220348 containerd[2097]: time="2025-01-17T12:17:31.220324015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:31.220451 containerd[2097]: time="2025-01-17T12:17:31.220434647Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:17:31.220653 containerd[2097]: time="2025-01-17T12:17:31.220635812Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:17:31.220838 containerd[2097]: time="2025-01-17T12:17:31.220822034Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:17:31.232154 containerd[2097]: time="2025-01-17T12:17:31.232106931Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:17:31.232282 containerd[2097]: time="2025-01-17T12:17:31.232187528Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:17:31.232282 containerd[2097]: time="2025-01-17T12:17:31.232209379Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:17:31.232282 containerd[2097]: time="2025-01-17T12:17:31.232228925Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:17:31.232282 containerd[2097]: time="2025-01-17T12:17:31.232247662Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:17:31.232716 containerd[2097]: time="2025-01-17T12:17:31.232445443Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.232958519Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233109912Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233133274Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233152901Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233173155Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233195420Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233214354Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233241585Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233263464Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233351601Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233371427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233389299Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233417280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.234657 containerd[2097]: time="2025-01-17T12:17:31.233437244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233454963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233474583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233497874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233519490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233537295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233555664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233574414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233599687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233617220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233638546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233656940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.233680133Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.234601820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.234643650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.235555 containerd[2097]: time="2025-01-17T12:17:31.234676574Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:17:31.236269 containerd[2097]: time="2025-01-17T12:17:31.234845304Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:17:31.236269 containerd[2097]: time="2025-01-17T12:17:31.234889845Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:17:31.236269 containerd[2097]: time="2025-01-17T12:17:31.234908003Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:17:31.236269 containerd[2097]: time="2025-01-17T12:17:31.234940577Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:17:31.236269 containerd[2097]: time="2025-01-17T12:17:31.235027905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:17:31.236269 containerd[2097]: time="2025-01-17T12:17:31.235055898Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:17:31.236269 containerd[2097]: time="2025-01-17T12:17:31.235170350Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:17:31.236269 containerd[2097]: time="2025-01-17T12:17:31.235192695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:17:31.236560 containerd[2097]: time="2025-01-17T12:17:31.235769097Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:17:31.236560 containerd[2097]: time="2025-01-17T12:17:31.235887145Z" level=info msg="Connect containerd service" Jan 17 12:17:31.236560 containerd[2097]: time="2025-01-17T12:17:31.235954097Z" level=info msg="using legacy CRI server" Jan 17 12:17:31.236560 containerd[2097]: time="2025-01-17T12:17:31.235965702Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:17:31.236560 containerd[2097]: time="2025-01-17T12:17:31.236371259Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.238915021Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 
12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.240007470Z" level=info msg="Start subscribing containerd event" Jan 17 12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.240622186Z" level=info msg="Start recovering state" Jan 17 12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.241087236Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.241144032Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.241821067Z" level=info msg="Start event monitor" Jan 17 12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.241862226Z" level=info msg="Start snapshots syncer" Jan 17 12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.241877128Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.241894124Z" level=info msg="Start streaming server" Jan 17 12:17:31.244585 containerd[2097]: time="2025-01-17T12:17:31.242877496Z" level=info msg="containerd successfully booted in 0.139962s" Jan 17 12:17:31.242731 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO [Registrar] Starting registrar module Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:30 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:31 INFO [EC2Identity] EC2 registration was successful. Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:31 INFO [CredentialRefresher] credentialRefresher has started Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:31 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 12:17:31.289505 amazon-ssm-agent[2134]: 2025-01-17 12:17:31 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 12:17:31.297736 amazon-ssm-agent[2134]: 2025-01-17 12:17:31 INFO [CredentialRefresher] Next credential rotation will be in 31.608324540016667 minutes Jan 17 12:17:31.471426 tar[2076]: linux-amd64/LICENSE Jan 17 12:17:31.478657 tar[2076]: linux-amd64/README.md Jan 17 12:17:31.517179 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
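The containerd error above about "no network config found in /etc/cni/net.d" is expected at this point in the boot: no CNI add-on has been installed yet, so the CRI plugin starts without pod networking and re-reads the directory once a conflist appears (normally dropped in later by the cluster's CNI DaemonSet). Purely to illustrate the kind of file it is waiting for, and not anything this boot actually writes, a minimal bridge/host-local conflist could be generated like this (the network name and subnet are made-up placeholders):

import json, pathlib

# Hypothetical minimal CNI conflist; real clusters usually install this via a
# CNI add-on (flannel, calico, ...) rather than writing it by hand.
conflist = {
    "cniVersion": "0.4.0",
    "name": "examplenet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# Requires root; with NetworkPluginMaxConfNum:1 (see the CRI config dump above)
# containerd uses the lexically first file it finds in /etc/cni/net.d.
pathlib.Path("/etc/cni/net.d/10-examplenet.conflist").write_text(
    json.dumps(conflist, indent=2)
)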
Jan 17 12:17:32.321145 amazon-ssm-agent[2134]: 2025-01-17 12:17:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 12:17:32.425557 amazon-ssm-agent[2134]: 2025-01-17 12:17:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2322) started Jan 17 12:17:32.522946 amazon-ssm-agent[2134]: 2025-01-17 12:17:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 12:17:32.640932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:32.643852 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:17:32.646530 systemd[1]: Startup finished in 8.938s (kernel) + 9.737s (userspace) = 18.675s. Jan 17 12:17:32.829668 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:34.261104 kubelet[2340]: E0117 12:17:34.260984 2340 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:34.264284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:34.264616 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:36.885073 systemd-resolved[1972]: Clock change detected. Flushing caches. Jan 17 12:17:37.576033 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:17:37.584591 systemd[1]: Started sshd@0-172.31.23.9:22-139.178.89.65:44166.service - OpenSSH per-connection server daemon (139.178.89.65:44166). Jan 17 12:17:37.789660 sshd[2354]: Accepted publickey for core from 139.178.89.65 port 44166 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:37.791305 sshd[2354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:37.812677 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:17:37.819645 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:17:37.823940 systemd-logind[2059]: New session 1 of user core. Jan 17 12:17:37.839619 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:17:37.851779 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:17:37.866454 (systemd)[2360]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:17:38.080222 systemd[2360]: Queued start job for default target default.target. Jan 17 12:17:38.080987 systemd[2360]: Created slice app.slice - User Application Slice. Jan 17 12:17:38.081023 systemd[2360]: Reached target paths.target - Paths. Jan 17 12:17:38.081042 systemd[2360]: Reached target timers.target - Timers. Jan 17 12:17:38.087969 systemd[2360]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:17:38.098118 systemd[2360]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:17:38.098392 systemd[2360]: Reached target sockets.target - Sockets. Jan 17 12:17:38.098421 systemd[2360]: Reached target basic.target - Basic System. Jan 17 12:17:38.098486 systemd[2360]: Reached target default.target - Main User Target. 
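The kubelet exit at 12:17:34 above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet joined a cluster: that file is written by kubeadm during init/join, so the unit fails and systemd keeps rescheduling it (the restarts at 12:17:45 and 12:17:56 later in this log fail the same way). Only as a hedged illustration of what the missing file contains, a skeletal KubeletConfiguration could be written like this; the field values are placeholders, and in practice kubeadm emits a much fuller document:

import pathlib

# Skeleton of the file the kubelet is trying to load; normally generated by
# "kubeadm init" / "kubeadm join", shown here only to make the error concrete.
config_yaml = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                 # placeholder; must match the container runtime
staticPodPath: /etc/kubernetes/manifests
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""

pathlib.Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
pathlib.Path("/var/lib/kubelet/config.yaml").write_text(config_yaml)

The containerRuntimeEndpoint field is also where the deprecated --container-runtime-endpoint flag, noted near the end of this log, is meant to move.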
Jan 17 12:17:38.098529 systemd[2360]: Startup finished in 222ms. Jan 17 12:17:38.099206 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:17:38.114327 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:17:38.276743 systemd[1]: Started sshd@1-172.31.23.9:22-139.178.89.65:44180.service - OpenSSH per-connection server daemon (139.178.89.65:44180). Jan 17 12:17:38.438484 sshd[2372]: Accepted publickey for core from 139.178.89.65 port 44180 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:38.441762 sshd[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:38.461737 systemd-logind[2059]: New session 2 of user core. Jan 17 12:17:38.473345 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:17:38.596398 sshd[2372]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:38.601129 systemd[1]: sshd@1-172.31.23.9:22-139.178.89.65:44180.service: Deactivated successfully. Jan 17 12:17:38.614889 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:17:38.621993 systemd-logind[2059]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:17:38.632305 systemd[1]: Started sshd@2-172.31.23.9:22-139.178.89.65:44182.service - OpenSSH per-connection server daemon (139.178.89.65:44182). Jan 17 12:17:38.633547 systemd-logind[2059]: Removed session 2. Jan 17 12:17:38.799035 sshd[2380]: Accepted publickey for core from 139.178.89.65 port 44182 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:38.800102 sshd[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:38.811950 systemd-logind[2059]: New session 3 of user core. Jan 17 12:17:38.817145 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:17:38.948973 sshd[2380]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:38.954749 systemd[1]: sshd@2-172.31.23.9:22-139.178.89.65:44182.service: Deactivated successfully. Jan 17 12:17:38.961485 systemd-logind[2059]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:17:38.962023 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:17:38.963249 systemd-logind[2059]: Removed session 3. Jan 17 12:17:38.985969 systemd[1]: Started sshd@3-172.31.23.9:22-139.178.89.65:44192.service - OpenSSH per-connection server daemon (139.178.89.65:44192). Jan 17 12:17:39.158732 sshd[2388]: Accepted publickey for core from 139.178.89.65 port 44192 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:39.160602 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:39.170906 systemd-logind[2059]: New session 4 of user core. Jan 17 12:17:39.176287 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:17:39.301129 sshd[2388]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:39.308061 systemd[1]: sshd@3-172.31.23.9:22-139.178.89.65:44192.service: Deactivated successfully. Jan 17 12:17:39.313133 systemd-logind[2059]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:17:39.314926 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:17:39.317507 systemd-logind[2059]: Removed session 4. Jan 17 12:17:39.333790 systemd[1]: Started sshd@4-172.31.23.9:22-139.178.89.65:44196.service - OpenSSH per-connection server daemon (139.178.89.65:44196). 
Jan 17 12:17:39.499180 sshd[2396]: Accepted publickey for core from 139.178.89.65 port 44196 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:39.501727 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:39.507945 systemd-logind[2059]: New session 5 of user core. Jan 17 12:17:39.513181 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:17:39.664211 sudo[2400]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:17:39.666278 sudo[2400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:39.694871 sudo[2400]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:39.717961 sshd[2396]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:39.722991 systemd[1]: sshd@4-172.31.23.9:22-139.178.89.65:44196.service: Deactivated successfully. Jan 17 12:17:39.731198 systemd-logind[2059]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:17:39.731231 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:17:39.733568 systemd-logind[2059]: Removed session 5. Jan 17 12:17:39.746284 systemd[1]: Started sshd@5-172.31.23.9:22-139.178.89.65:44212.service - OpenSSH per-connection server daemon (139.178.89.65:44212). Jan 17 12:17:39.904987 sshd[2405]: Accepted publickey for core from 139.178.89.65 port 44212 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:39.911796 sshd[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:39.936683 systemd-logind[2059]: New session 6 of user core. Jan 17 12:17:39.949502 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:17:40.067900 sudo[2410]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:17:40.068388 sudo[2410]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:40.082237 sudo[2410]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:40.105692 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:17:40.109244 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:40.144924 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:17:40.175408 auditctl[2413]: No rules Jan 17 12:17:40.175994 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:17:40.176345 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:17:40.198947 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:17:40.253947 augenrules[2432]: No rules Jan 17 12:17:40.255968 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:17:40.261093 sudo[2409]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:40.284112 sshd[2405]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:40.291625 systemd-logind[2059]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:17:40.294279 systemd[1]: sshd@5-172.31.23.9:22-139.178.89.65:44212.service: Deactivated successfully. Jan 17 12:17:40.299264 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:17:40.300446 systemd-logind[2059]: Removed session 6. 
Jan 17 12:17:40.317385 systemd[1]: Started sshd@6-172.31.23.9:22-139.178.89.65:44216.service - OpenSSH per-connection server daemon (139.178.89.65:44216). Jan 17 12:17:40.512109 sshd[2441]: Accepted publickey for core from 139.178.89.65 port 44216 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:40.513438 sshd[2441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:40.530424 systemd-logind[2059]: New session 7 of user core. Jan 17 12:17:40.537199 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:17:40.640978 sudo[2445]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:17:40.641379 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:41.300348 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:17:41.303392 (dockerd)[2460]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:17:41.944694 dockerd[2460]: time="2025-01-17T12:17:41.944644906Z" level=info msg="Starting up" Jan 17 12:17:42.428613 dockerd[2460]: time="2025-01-17T12:17:42.427623471Z" level=info msg="Loading containers: start." Jan 17 12:17:42.596866 kernel: Initializing XFRM netlink socket Jan 17 12:17:42.640392 (udev-worker)[2481]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:17:42.729352 systemd-networkd[1647]: docker0: Link UP Jan 17 12:17:42.754343 dockerd[2460]: time="2025-01-17T12:17:42.753302938Z" level=info msg="Loading containers: done." Jan 17 12:17:42.787808 dockerd[2460]: time="2025-01-17T12:17:42.787754486Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:17:42.788011 dockerd[2460]: time="2025-01-17T12:17:42.787886860Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:17:42.788061 dockerd[2460]: time="2025-01-17T12:17:42.788022457Z" level=info msg="Daemon has completed initialization" Jan 17 12:17:42.833023 dockerd[2460]: time="2025-01-17T12:17:42.831929275Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:17:42.832300 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:17:44.320852 containerd[2097]: time="2025-01-17T12:17:44.320801305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:17:44.898831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:17:44.913878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:44.964199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442052466.mount: Deactivated successfully. Jan 17 12:17:45.249259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
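Once dockerd logs "API listen on /run/docker.sock" above, the engine is reachable over that Unix socket with plain HTTP. A small stdlib-only Python sketch that queries the /version endpoint over the socket, assuming the default socket path from the log and that the caller has permission to open it:

import json, socket

# Speak HTTP/1.0 directly over the Docker Unix socket reported in the log.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/docker.sock")
s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")

raw = b""
while chunk := s.recv(4096):
    raw += chunk
s.close()

_headers, _, body = raw.partition(b"\r\n\r\n")
print(json.loads(body)["Version"])   # e.g. "26.1.0", matching the daemon version logged above

In practice one would reach for "docker version" or an SDK; the raw-socket form is shown only because it needs nothing beyond the socket path that appears in the log.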
Jan 17 12:17:45.253767 (kubelet)[2629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:45.412296 kubelet[2629]: E0117 12:17:45.412238 2629 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:45.427751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:45.428192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:47.490550 containerd[2097]: time="2025-01-17T12:17:47.490499557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:47.491871 containerd[2097]: time="2025-01-17T12:17:47.491808056Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 17 12:17:47.493352 containerd[2097]: time="2025-01-17T12:17:47.492954282Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:47.496240 containerd[2097]: time="2025-01-17T12:17:47.496204381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:47.499498 containerd[2097]: time="2025-01-17T12:17:47.499451077Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 3.178605491s" Jan 17 12:17:47.499605 containerd[2097]: time="2025-01-17T12:17:47.499509296Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:17:47.532000 containerd[2097]: time="2025-01-17T12:17:47.531962061Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:17:49.972561 containerd[2097]: time="2025-01-17T12:17:49.972513048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:49.974339 containerd[2097]: time="2025-01-17T12:17:49.974258117Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 17 12:17:49.975724 containerd[2097]: time="2025-01-17T12:17:49.975305526Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:49.978083 containerd[2097]: time="2025-01-17T12:17:49.978045887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 
12:17:49.979567 containerd[2097]: time="2025-01-17T12:17:49.979527715Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 2.447524245s" Jan 17 12:17:49.979706 containerd[2097]: time="2025-01-17T12:17:49.979574457Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:17:50.013400 containerd[2097]: time="2025-01-17T12:17:50.013365358Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:17:51.574152 containerd[2097]: time="2025-01-17T12:17:51.574104824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:51.575643 containerd[2097]: time="2025-01-17T12:17:51.575476753Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 17 12:17:51.576861 containerd[2097]: time="2025-01-17T12:17:51.576620146Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:51.579576 containerd[2097]: time="2025-01-17T12:17:51.579489045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:51.581639 containerd[2097]: time="2025-01-17T12:17:51.580683889Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.567108878s" Jan 17 12:17:51.581639 containerd[2097]: time="2025-01-17T12:17:51.580729308Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:17:51.604368 containerd[2097]: time="2025-01-17T12:17:51.604334313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:17:52.975541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3837613422.mount: Deactivated successfully. 
Jan 17 12:17:53.576357 containerd[2097]: time="2025-01-17T12:17:53.576302017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:53.577633 containerd[2097]: time="2025-01-17T12:17:53.577486702Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:17:53.578960 containerd[2097]: time="2025-01-17T12:17:53.578926341Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:53.581250 containerd[2097]: time="2025-01-17T12:17:53.581187424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:53.582022 containerd[2097]: time="2025-01-17T12:17:53.581828241Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.977454538s" Jan 17 12:17:53.582022 containerd[2097]: time="2025-01-17T12:17:53.581878897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:17:53.606237 containerd[2097]: time="2025-01-17T12:17:53.606197077Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:17:54.173666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1801442485.mount: Deactivated successfully. 
Jan 17 12:17:55.428169 containerd[2097]: time="2025-01-17T12:17:55.428115370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:55.430109 containerd[2097]: time="2025-01-17T12:17:55.430058005Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:17:55.433470 containerd[2097]: time="2025-01-17T12:17:55.431231652Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:55.435647 containerd[2097]: time="2025-01-17T12:17:55.435608738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:55.437255 containerd[2097]: time="2025-01-17T12:17:55.437214353Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.830977124s" Jan 17 12:17:55.437349 containerd[2097]: time="2025-01-17T12:17:55.437264662Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:17:55.468766 containerd[2097]: time="2025-01-17T12:17:55.468737850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:17:55.678529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:17:55.686546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:56.032074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:56.051436 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:56.092638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678608108.mount: Deactivated successfully. 
Jan 17 12:17:56.102666 containerd[2097]: time="2025-01-17T12:17:56.102055237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:56.108176 containerd[2097]: time="2025-01-17T12:17:56.108111780Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:17:56.119864 containerd[2097]: time="2025-01-17T12:17:56.117315537Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:56.127713 containerd[2097]: time="2025-01-17T12:17:56.127677598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:56.128996 containerd[2097]: time="2025-01-17T12:17:56.128369088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 659.412545ms" Jan 17 12:17:56.129162 containerd[2097]: time="2025-01-17T12:17:56.129140477Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:17:56.170817 containerd[2097]: time="2025-01-17T12:17:56.170780705Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:17:56.182587 kubelet[2773]: E0117 12:17:56.181151 2773 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:56.189544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:56.190226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:56.922980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2005922523.mount: Deactivated successfully. 
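The kubelet exit above (status=1/FAILURE) is caused by the missing /var/lib/kubelet/config.yaml, a file that is normally written when the node is initialized with kubeadm; until it exists, systemd keeps rescheduling the unit (the restart counter was already at 2 earlier in the log). The Go sketch below only illustrates that load-or-fail behaviour and is not the kubelet's actual implementation:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the log; kubeadm normally writes this file during init/join.
	const configPath = "/var/lib/kubelet/config.yaml"

	if _, err := os.ReadFile(configPath); err != nil {
		// Mirrors the observed behaviour: report the error and exit non-zero,
		// which systemd records as status=1/FAILURE and retries per the
		// unit's restart policy.
		fmt.Fprintf(os.Stderr, "failed to load kubelet config file %q: %v\n", configPath, err)
		os.Exit(1)
	}
	fmt.Println("config loaded; startup would continue")
}
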
Jan 17 12:17:59.532497 containerd[2097]: time="2025-01-17T12:17:59.532405048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:59.534286 containerd[2097]: time="2025-01-17T12:17:59.534026951Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 17 12:17:59.536572 containerd[2097]: time="2025-01-17T12:17:59.535177405Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:59.538319 containerd[2097]: time="2025-01-17T12:17:59.538274562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:59.539681 containerd[2097]: time="2025-01-17T12:17:59.539645383Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.368818719s" Jan 17 12:17:59.539822 containerd[2097]: time="2025-01-17T12:17:59.539803196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:18:01.279811 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 12:18:05.370487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:05.378207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:05.414887 systemd[1]: Reloading requested from client PID 2911 ('systemctl') (unit session-7.scope)... Jan 17 12:18:05.415030 systemd[1]: Reloading... Jan 17 12:18:05.576937 zram_generator::config[2952]: No configuration found. Jan 17 12:18:05.762608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:05.847011 systemd[1]: Reloading finished in 431 ms. Jan 17 12:18:05.893498 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:18:05.893626 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:18:05.894048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:05.897235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:06.173066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:06.175027 (kubelet)[3020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:18:06.249057 kubelet[3020]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:06.249057 kubelet[3020]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 17 12:18:06.249057 kubelet[3020]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:06.252791 kubelet[3020]: I0117 12:18:06.252670 3020 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:18:06.833649 kubelet[3020]: I0117 12:18:06.833605 3020 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:18:06.833649 kubelet[3020]: I0117 12:18:06.833647 3020 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:18:06.834011 kubelet[3020]: I0117 12:18:06.833988 3020 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:18:06.872584 kubelet[3020]: I0117 12:18:06.872545 3020 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:18:06.875025 kubelet[3020]: E0117 12:18:06.874992 3020 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:06.892326 kubelet[3020]: I0117 12:18:06.892291 3020 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:18:06.896732 kubelet[3020]: I0117 12:18:06.896684 3020 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:18:06.898336 kubelet[3020]: I0117 12:18:06.898296 3020 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:18:06.898676 kubelet[3020]: I0117 12:18:06.898434 3020 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:18:06.898676 kubelet[3020]: I0117 12:18:06.898454 3020 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 
12:18:06.898676 kubelet[3020]: I0117 12:18:06.898662 3020 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:06.898815 kubelet[3020]: I0117 12:18:06.898800 3020 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:18:06.898866 kubelet[3020]: I0117 12:18:06.898820 3020 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:18:06.898910 kubelet[3020]: I0117 12:18:06.898878 3020 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:18:06.898910 kubelet[3020]: I0117 12:18:06.898902 3020 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:18:06.901744 kubelet[3020]: W0117 12:18:06.901678 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:06.902032 kubelet[3020]: E0117 12:18:06.901756 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:06.903565 kubelet[3020]: W0117 12:18:06.903320 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-9&limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:06.903655 kubelet[3020]: E0117 12:18:06.903576 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-9&limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:06.903762 kubelet[3020]: I0117 12:18:06.903734 3020 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:18:06.912871 kubelet[3020]: I0117 12:18:06.912651 3020 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:18:06.915865 kubelet[3020]: W0117 12:18:06.915436 3020 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
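Every reflector and certificate-signing error above is the same symptom: this kubelet is bootstrapping the control plane, so https://172.31.23.9:6443 refuses connections until the kube-apiserver static pod it is about to create comes up, and client-go simply keeps retrying. A rough Go sketch of the same wait-until-reachable loop; the address is quoted from the log, while the retry interval is an arbitrary illustration:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// API server address as it appears in the log entries above.
	const apiServer = "172.31.23.9:6443"

	for {
		conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
		if err != nil {
			fmt.Printf("apiserver not reachable yet: %v; retrying\n", err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("apiserver accepting connections; list/watch calls can succeed now")
		return
	}
}
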
Jan 17 12:18:06.919523 kubelet[3020]: I0117 12:18:06.919487 3020 server.go:1256] "Started kubelet" Jan 17 12:18:06.928642 kubelet[3020]: I0117 12:18:06.928599 3020 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:18:06.934006 kubelet[3020]: E0117 12:18:06.933976 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.9:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-9.181b7a0f2f614907 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-9,UID:ip-172-31-23-9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-9,},FirstTimestamp:2025-01-17 12:18:06.919461127 +0000 UTC m=+0.737496298,LastTimestamp:2025-01-17 12:18:06.919461127 +0000 UTC m=+0.737496298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-9,}" Jan 17 12:18:06.939696 kubelet[3020]: I0117 12:18:06.939667 3020 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:18:06.942743 kubelet[3020]: I0117 12:18:06.941242 3020 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:18:06.943224 kubelet[3020]: I0117 12:18:06.943199 3020 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:18:06.944877 kubelet[3020]: E0117 12:18:06.944337 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": dial tcp 172.31.23.9:6443: connect: connection refused" interval="200ms" Jan 17 12:18:06.944877 kubelet[3020]: I0117 12:18:06.943251 3020 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:18:06.944877 kubelet[3020]: I0117 12:18:06.944575 3020 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:18:06.946810 kubelet[3020]: I0117 12:18:06.946785 3020 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:18:06.947332 kubelet[3020]: W0117 12:18:06.947281 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:06.947407 kubelet[3020]: E0117 12:18:06.947345 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:06.947827 kubelet[3020]: I0117 12:18:06.947801 3020 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:18:06.949124 kubelet[3020]: I0117 12:18:06.948939 3020 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:18:06.949320 kubelet[3020]: I0117 12:18:06.949225 3020 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:18:06.952047 kubelet[3020]: E0117 12:18:06.952026 
3020 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:18:06.952459 kubelet[3020]: I0117 12:18:06.952441 3020 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:18:06.987920 kubelet[3020]: I0117 12:18:06.987739 3020 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:18:06.991304 kubelet[3020]: I0117 12:18:06.991252 3020 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:18:06.993177 kubelet[3020]: I0117 12:18:06.991801 3020 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:18:06.993177 kubelet[3020]: I0117 12:18:06.991849 3020 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:18:06.993177 kubelet[3020]: E0117 12:18:06.991920 3020 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:18:06.996468 kubelet[3020]: W0117 12:18:06.996437 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:06.996779 kubelet[3020]: E0117 12:18:06.996761 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:06.996978 kubelet[3020]: I0117 12:18:06.996596 3020 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:18:06.997086 kubelet[3020]: I0117 12:18:06.997076 3020 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:18:06.997191 kubelet[3020]: I0117 12:18:06.997181 3020 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:07.000629 kubelet[3020]: I0117 12:18:07.000608 3020 policy_none.go:49] "None policy: Start" Jan 17 12:18:07.001654 kubelet[3020]: I0117 12:18:07.001633 3020 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:18:07.001822 kubelet[3020]: I0117 12:18:07.001813 3020 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:18:07.011660 kubelet[3020]: I0117 12:18:07.011628 3020 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:18:07.012711 kubelet[3020]: I0117 12:18:07.012678 3020 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:18:07.016161 kubelet[3020]: E0117 12:18:07.016123 3020 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-9\" not found" Jan 17 12:18:07.045330 kubelet[3020]: I0117 12:18:07.045269 3020 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-9" Jan 17 12:18:07.045881 kubelet[3020]: E0117 12:18:07.045855 3020 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.9:6443/api/v1/nodes\": dial tcp 172.31.23.9:6443: connect: connection refused" node="ip-172-31-23-9" Jan 17 12:18:07.092395 kubelet[3020]: I0117 12:18:07.092261 3020 topology_manager.go:215] "Topology Admit Handler" podUID="db9f9c7b8f43a332c849ed9b56408273" podNamespace="kube-system" 
podName="kube-scheduler-ip-172-31-23-9" Jan 17 12:18:07.098068 kubelet[3020]: I0117 12:18:07.098038 3020 topology_manager.go:215] "Topology Admit Handler" podUID="cc1f55da61973c9433174a246291ff8e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-9" Jan 17 12:18:07.106445 kubelet[3020]: I0117 12:18:07.106233 3020 topology_manager.go:215] "Topology Admit Handler" podUID="6a49b8d79dd234fb64b3fcf80db7a53a" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:07.145411 kubelet[3020]: E0117 12:18:07.145366 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": dial tcp 172.31.23.9:6443: connect: connection refused" interval="400ms" Jan 17 12:18:07.248266 kubelet[3020]: I0117 12:18:07.248232 3020 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-9" Jan 17 12:18:07.248671 kubelet[3020]: E0117 12:18:07.248646 3020 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.9:6443/api/v1/nodes\": dial tcp 172.31.23.9:6443: connect: connection refused" node="ip-172-31-23-9" Jan 17 12:18:07.253859 kubelet[3020]: I0117 12:18:07.253663 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:07.254604 kubelet[3020]: I0117 12:18:07.253893 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc1f55da61973c9433174a246291ff8e-ca-certs\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"cc1f55da61973c9433174a246291ff8e\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jan 17 12:18:07.254604 kubelet[3020]: I0117 12:18:07.254019 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc1f55da61973c9433174a246291ff8e-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"cc1f55da61973c9433174a246291ff8e\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jan 17 12:18:07.254604 kubelet[3020]: I0117 12:18:07.254066 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc1f55da61973c9433174a246291ff8e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"cc1f55da61973c9433174a246291ff8e\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jan 17 12:18:07.254604 kubelet[3020]: I0117 12:18:07.254102 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:07.254604 kubelet[3020]: I0117 12:18:07.254131 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:07.254745 kubelet[3020]: I0117 12:18:07.254336 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:07.254745 kubelet[3020]: I0117 12:18:07.254371 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db9f9c7b8f43a332c849ed9b56408273-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-9\" (UID: \"db9f9c7b8f43a332c849ed9b56408273\") " pod="kube-system/kube-scheduler-ip-172-31-23-9" Jan 17 12:18:07.254745 kubelet[3020]: I0117 12:18:07.254404 3020 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:07.419009 containerd[2097]: time="2025-01-17T12:18:07.418886915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-9,Uid:db9f9c7b8f43a332c849ed9b56408273,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:07.427206 containerd[2097]: time="2025-01-17T12:18:07.427159234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-9,Uid:6a49b8d79dd234fb64b3fcf80db7a53a,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:07.427718 containerd[2097]: time="2025-01-17T12:18:07.427518957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-9,Uid:cc1f55da61973c9433174a246291ff8e,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:07.548662 kubelet[3020]: E0117 12:18:07.548620 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": dial tcp 172.31.23.9:6443: connect: connection refused" interval="800ms" Jan 17 12:18:07.651100 kubelet[3020]: I0117 12:18:07.651037 3020 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-9" Jan 17 12:18:07.651412 kubelet[3020]: E0117 12:18:07.651388 3020 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.9:6443/api/v1/nodes\": dial tcp 172.31.23.9:6443: connect: connection refused" node="ip-172-31-23-9" Jan 17 12:18:07.899055 kubelet[3020]: W0117 12:18:07.898917 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:07.899055 kubelet[3020]: E0117 12:18:07.898978 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:07.988460 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3841122486.mount: Deactivated successfully. Jan 17 12:18:07.997129 containerd[2097]: time="2025-01-17T12:18:07.997082891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:07.998396 containerd[2097]: time="2025-01-17T12:18:07.998345047Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:18:07.999262 containerd[2097]: time="2025-01-17T12:18:07.999228004Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:08.001015 containerd[2097]: time="2025-01-17T12:18:08.000970489Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:18:08.001970 containerd[2097]: time="2025-01-17T12:18:08.001932251Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:08.003676 containerd[2097]: time="2025-01-17T12:18:08.003385048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:18:08.003676 containerd[2097]: time="2025-01-17T12:18:08.003519111Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:08.009862 containerd[2097]: time="2025-01-17T12:18:08.008319217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:08.011742 containerd[2097]: time="2025-01-17T12:18:08.011567235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.30697ms" Jan 17 12:18:08.015405 containerd[2097]: time="2025-01-17T12:18:08.015362111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.384835ms" Jan 17 12:18:08.016995 containerd[2097]: time="2025-01-17T12:18:08.016955895Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.371251ms" Jan 17 12:18:08.056415 kubelet[3020]: W0117 12:18:08.056370 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:08.058160 kubelet[3020]: E0117 12:18:08.058121 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:08.079055 kubelet[3020]: W0117 12:18:08.079002 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:08.079354 kubelet[3020]: E0117 12:18:08.079338 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:08.347408 containerd[2097]: time="2025-01-17T12:18:08.321007662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:08.347408 containerd[2097]: time="2025-01-17T12:18:08.321100373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:08.347408 containerd[2097]: time="2025-01-17T12:18:08.321131302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:08.347408 containerd[2097]: time="2025-01-17T12:18:08.321288761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:08.351547 kubelet[3020]: E0117 12:18:08.350091 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": dial tcp 172.31.23.9:6443: connect: connection refused" interval="1.6s" Jan 17 12:18:08.355564 containerd[2097]: time="2025-01-17T12:18:08.355383424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:08.355564 containerd[2097]: time="2025-01-17T12:18:08.355511043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:08.355564 containerd[2097]: time="2025-01-17T12:18:08.355527451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:08.356120 containerd[2097]: time="2025-01-17T12:18:08.355901382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:08.364560 containerd[2097]: time="2025-01-17T12:18:08.364219454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:08.364560 containerd[2097]: time="2025-01-17T12:18:08.364297395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:08.364560 containerd[2097]: time="2025-01-17T12:18:08.364322755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:08.364560 containerd[2097]: time="2025-01-17T12:18:08.364432116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:08.389499 kubelet[3020]: W0117 12:18:08.389039 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-9&limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:08.389499 kubelet[3020]: E0117 12:18:08.389113 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-9&limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:08.455081 kubelet[3020]: I0117 12:18:08.455026 3020 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-9" Jan 17 12:18:08.461794 kubelet[3020]: E0117 12:18:08.456772 3020 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.9:6443/api/v1/nodes\": dial tcp 172.31.23.9:6443: connect: connection refused" node="ip-172-31-23-9" Jan 17 12:18:08.584311 containerd[2097]: time="2025-01-17T12:18:08.584160909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-9,Uid:6a49b8d79dd234fb64b3fcf80db7a53a,Namespace:kube-system,Attempt:0,} returns sandbox id \"32b7d45ba6ae34365c0cbb060b1ce3d936f3e825335c6b36c2b60af16375cb7d\"" Jan 17 12:18:08.587264 containerd[2097]: time="2025-01-17T12:18:08.587229324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-9,Uid:db9f9c7b8f43a332c849ed9b56408273,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc09e3ee383f20ede7afd1d88c9c020889c16ba8b20201f7f384fe806522af5d\"" Jan 17 12:18:08.587717 containerd[2097]: time="2025-01-17T12:18:08.587682695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-9,Uid:cc1f55da61973c9433174a246291ff8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"78ac1182b1895fea1dbd3536747290ba2c5b12bcd21b8baf7f2c17cfadb95d59\"" Jan 17 12:18:08.594231 containerd[2097]: time="2025-01-17T12:18:08.594054269Z" level=info msg="CreateContainer within sandbox \"78ac1182b1895fea1dbd3536747290ba2c5b12bcd21b8baf7f2c17cfadb95d59\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:18:08.594231 containerd[2097]: time="2025-01-17T12:18:08.594058070Z" level=info msg="CreateContainer within sandbox \"cc09e3ee383f20ede7afd1d88c9c020889c16ba8b20201f7f384fe806522af5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:18:08.594869 containerd[2097]: time="2025-01-17T12:18:08.594815002Z" level=info msg="CreateContainer within sandbox \"32b7d45ba6ae34365c0cbb060b1ce3d936f3e825335c6b36c2b60af16375cb7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:18:08.636447 containerd[2097]: time="2025-01-17T12:18:08.636249458Z" level=info msg="CreateContainer within sandbox \"78ac1182b1895fea1dbd3536747290ba2c5b12bcd21b8baf7f2c17cfadb95d59\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d5e67ad1eb3ec060162e49f25a82e5b610299b83bc8c8dbace158d7f0c98317e\"" Jan 17 12:18:08.640857 containerd[2097]: time="2025-01-17T12:18:08.639742867Z" level=info msg="StartContainer for \"d5e67ad1eb3ec060162e49f25a82e5b610299b83bc8c8dbace158d7f0c98317e\"" Jan 17 12:18:08.652519 containerd[2097]: time="2025-01-17T12:18:08.652475480Z" level=info msg="CreateContainer within sandbox \"cc09e3ee383f20ede7afd1d88c9c020889c16ba8b20201f7f384fe806522af5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43b8dea1c162bb1fc2830b98d072198f3bf42562f8e217e3861fa374bd1d95fe\"" Jan 17 12:18:08.657254 containerd[2097]: time="2025-01-17T12:18:08.657208511Z" level=info msg="CreateContainer within sandbox \"32b7d45ba6ae34365c0cbb060b1ce3d936f3e825335c6b36c2b60af16375cb7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dae6e4b492413937dd7028df694a9a0e87cb2eddec9161c733c60a5144e933f6\"" Jan 17 12:18:08.658541 containerd[2097]: time="2025-01-17T12:18:08.657985342Z" level=info msg="StartContainer for \"43b8dea1c162bb1fc2830b98d072198f3bf42562f8e217e3861fa374bd1d95fe\"" Jan 17 12:18:08.660409 containerd[2097]: time="2025-01-17T12:18:08.660375697Z" level=info msg="StartContainer for \"dae6e4b492413937dd7028df694a9a0e87cb2eddec9161c733c60a5144e933f6\"" Jan 17 12:18:08.699368 kubelet[3020]: E0117 12:18:08.699335 3020 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.9:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-9.181b7a0f2f614907 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-9,UID:ip-172-31-23-9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-9,},FirstTimestamp:2025-01-17 12:18:06.919461127 +0000 UTC m=+0.737496298,LastTimestamp:2025-01-17 12:18:06.919461127 +0000 UTC m=+0.737496298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-9,}" Jan 17 12:18:08.839270 containerd[2097]: time="2025-01-17T12:18:08.838681406Z" level=info msg="StartContainer for \"d5e67ad1eb3ec060162e49f25a82e5b610299b83bc8c8dbace158d7f0c98317e\" returns successfully" Jan 17 12:18:08.855273 containerd[2097]: time="2025-01-17T12:18:08.855225014Z" level=info msg="StartContainer for \"43b8dea1c162bb1fc2830b98d072198f3bf42562f8e217e3861fa374bd1d95fe\" returns successfully" Jan 17 12:18:08.863732 containerd[2097]: time="2025-01-17T12:18:08.862869876Z" level=info msg="StartContainer for \"dae6e4b492413937dd7028df694a9a0e87cb2eddec9161c733c60a5144e933f6\" returns successfully" Jan 17 12:18:08.988094 kubelet[3020]: E0117 12:18:08.987970 3020 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:09.748540 kubelet[3020]: W0117 12:18:09.748441 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:09.748540 kubelet[3020]: E0117 12:18:09.748517 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:09.765581 kubelet[3020]: W0117 12:18:09.765413 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:09.765581 kubelet[3020]: E0117 12:18:09.765553 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:09.950956 kubelet[3020]: E0117 12:18:09.950918 3020 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": dial tcp 172.31.23.9:6443: connect: connection refused" interval="3.2s" Jan 17 12:18:09.959886 kubelet[3020]: W0117 12:18:09.958626 3020 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:09.959886 kubelet[3020]: E0117 12:18:09.958700 3020 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jan 17 12:18:10.060707 kubelet[3020]: I0117 12:18:10.060288 3020 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-9" Jan 17 12:18:10.060707 kubelet[3020]: E0117 12:18:10.060638 3020 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.9:6443/api/v1/nodes\": dial tcp 172.31.23.9:6443: connect: connection refused" node="ip-172-31-23-9" Jan 17 12:18:11.831520 kubelet[3020]: E0117 12:18:11.831486 3020 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-9" not found Jan 17 12:18:11.903703 kubelet[3020]: I0117 12:18:11.903659 3020 apiserver.go:52] "Watching apiserver" Jan 17 12:18:11.947371 kubelet[3020]: I0117 12:18:11.947309 3020 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:18:12.187426 kubelet[3020]: E0117 12:18:12.187318 3020 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-9" not found Jan 17 12:18:12.632500 kubelet[3020]: E0117 12:18:12.632456 3020 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-9" not found Jan 17 12:18:13.162487 kubelet[3020]: E0117 12:18:13.162438 3020 nodelease.go:49] "Failed to get 
node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-9\" not found" node="ip-172-31-23-9" Jan 17 12:18:13.263600 kubelet[3020]: I0117 12:18:13.263570 3020 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-9" Jan 17 12:18:13.275533 kubelet[3020]: I0117 12:18:13.274403 3020 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-9" Jan 17 12:18:14.872204 systemd[1]: Reloading requested from client PID 3298 ('systemctl') (unit session-7.scope)... Jan 17 12:18:14.872224 systemd[1]: Reloading... Jan 17 12:18:14.995464 zram_generator::config[3336]: No configuration found. Jan 17 12:18:15.136981 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:15.232289 systemd[1]: Reloading finished in 359 ms. Jan 17 12:18:15.279939 kubelet[3020]: I0117 12:18:15.279882 3020 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:18:15.280375 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:15.303375 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:18:15.303711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:15.316620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:15.369492 update_engine[2061]: I20250117 12:18:15.369035 2061 update_attempter.cc:509] Updating boot flags... Jan 17 12:18:15.506116 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3411) Jan 17 12:18:15.813051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:15.843417 (kubelet)[3501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:18:15.981012 kubelet[3501]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:15.981012 kubelet[3501]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:18:15.981012 kubelet[3501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:15.981012 kubelet[3501]: I0117 12:18:15.980344 3501 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:18:15.985616 kubelet[3501]: I0117 12:18:15.985575 3501 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:18:15.985616 kubelet[3501]: I0117 12:18:15.985602 3501 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:18:15.985954 kubelet[3501]: I0117 12:18:15.985904 3501 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:18:15.988028 kubelet[3501]: I0117 12:18:15.987994 3501 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
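Unlike the first kubelet instance, the restarted kubelet (PID 3501) reports that client rotation is on and loads its existing client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem, and the earlier CSR errors do not recur in the log. A small Go sketch for inspecting when such a certificate expires; the path is taken from the log line above and the parsing is deliberately minimal:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path quoted from the kubelet log; adjust if certs live elsewhere on your node.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The file typically holds the client cert followed by its key; inspect the
	// first CERTIFICATE block.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
		return
	}
	fmt.Fprintln(os.Stderr, "no CERTIFICATE block found")
	os.Exit(1)
}
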
Jan 17 12:18:16.001376 kubelet[3501]: I0117 12:18:15.999815 3501 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:18:16.016486 kubelet[3501]: I0117 12:18:16.015305 3501 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:18:16.021445 kubelet[3501]: I0117 12:18:16.020087 3501 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:18:16.021445 kubelet[3501]: I0117 12:18:16.020389 3501 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:18:16.021445 kubelet[3501]: I0117 12:18:16.020425 3501 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:18:16.021445 kubelet[3501]: I0117 12:18:16.020441 3501 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:18:16.021445 kubelet[3501]: I0117 12:18:16.020494 3501 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:16.021445 kubelet[3501]: I0117 12:18:16.020655 3501 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:18:16.021975 kubelet[3501]: I0117 12:18:16.020676 3501 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:18:16.021975 kubelet[3501]: I0117 12:18:16.021334 3501 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:18:16.021975 kubelet[3501]: I0117 12:18:16.021408 3501 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:18:16.027051 kubelet[3501]: I0117 12:18:16.027019 3501 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:18:16.027394 kubelet[3501]: I0117 12:18:16.027330 3501 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:18:16.027865 kubelet[3501]: I0117 12:18:16.027816 3501 server.go:1256] "Started kubelet" Jan 17 12:18:16.033936 kubelet[3501]: I0117 12:18:16.033561 3501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:18:16.044649 kubelet[3501]: I0117 12:18:16.044617 3501 
server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:18:16.045742 kubelet[3501]: I0117 12:18:16.045696 3501 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:18:16.047662 kubelet[3501]: I0117 12:18:16.047316 3501 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:18:16.047662 kubelet[3501]: I0117 12:18:16.047565 3501 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:18:16.053540 kubelet[3501]: I0117 12:18:16.053229 3501 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:18:16.062746 kubelet[3501]: I0117 12:18:16.059929 3501 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:18:16.062746 kubelet[3501]: I0117 12:18:16.060110 3501 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:18:16.068078 kubelet[3501]: I0117 12:18:16.067981 3501 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:18:16.072600 kubelet[3501]: I0117 12:18:16.072561 3501 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:18:16.083760 kubelet[3501]: I0117 12:18:16.083701 3501 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:18:16.083760 kubelet[3501]: I0117 12:18:16.083771 3501 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:18:16.084064 kubelet[3501]: E0117 12:18:16.083878 3501 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:18:16.104891 kubelet[3501]: I0117 12:18:16.103300 3501 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:18:16.104891 kubelet[3501]: I0117 12:18:16.103445 3501 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:18:16.108890 kubelet[3501]: E0117 12:18:16.108866 3501 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:18:16.110154 kubelet[3501]: I0117 12:18:16.110134 3501 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:18:16.160662 kubelet[3501]: I0117 12:18:16.160634 3501 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-9" Jan 17 12:18:16.176373 kubelet[3501]: I0117 12:18:16.176340 3501 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-9" Jan 17 12:18:16.176630 kubelet[3501]: I0117 12:18:16.176620 3501 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-9" Jan 17 12:18:16.190562 kubelet[3501]: E0117 12:18:16.190231 3501 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:18:16.218657 kubelet[3501]: I0117 12:18:16.218629 3501 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:18:16.218830 kubelet[3501]: I0117 12:18:16.218821 3501 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:18:16.218974 kubelet[3501]: I0117 12:18:16.218952 3501 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:16.219133 kubelet[3501]: I0117 12:18:16.219114 3501 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:18:16.219190 kubelet[3501]: I0117 12:18:16.219145 3501 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:18:16.219190 kubelet[3501]: I0117 12:18:16.219155 3501 policy_none.go:49] "None policy: Start" Jan 17 12:18:16.221079 kubelet[3501]: I0117 12:18:16.221059 3501 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:18:16.221202 kubelet[3501]: I0117 12:18:16.221087 3501 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:18:16.221283 kubelet[3501]: I0117 12:18:16.221264 3501 state_mem.go:75] "Updated machine memory state" Jan 17 12:18:16.223954 kubelet[3501]: I0117 12:18:16.222941 3501 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:18:16.224859 kubelet[3501]: I0117 12:18:16.224829 3501 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:18:16.390762 kubelet[3501]: I0117 12:18:16.390655 3501 topology_manager.go:215] "Topology Admit Handler" podUID="cc1f55da61973c9433174a246291ff8e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-9" Jan 17 12:18:16.390915 kubelet[3501]: I0117 12:18:16.390769 3501 topology_manager.go:215] "Topology Admit Handler" podUID="6a49b8d79dd234fb64b3fcf80db7a53a" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:16.390915 kubelet[3501]: I0117 12:18:16.390817 3501 topology_manager.go:215] "Topology Admit Handler" podUID="db9f9c7b8f43a332c849ed9b56408273" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-9" Jan 17 12:18:16.407323 kubelet[3501]: E0117 12:18:16.407279 3501 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-23-9\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-9" Jan 17 12:18:16.462184 kubelet[3501]: I0117 12:18:16.461787 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " 
pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:16.462184 kubelet[3501]: I0117 12:18:16.461865 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db9f9c7b8f43a332c849ed9b56408273-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-9\" (UID: \"db9f9c7b8f43a332c849ed9b56408273\") " pod="kube-system/kube-scheduler-ip-172-31-23-9" Jan 17 12:18:16.462184 kubelet[3501]: I0117 12:18:16.461901 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc1f55da61973c9433174a246291ff8e-ca-certs\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"cc1f55da61973c9433174a246291ff8e\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jan 17 12:18:16.462184 kubelet[3501]: I0117 12:18:16.461931 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:16.462184 kubelet[3501]: I0117 12:18:16.461964 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:16.462518 kubelet[3501]: I0117 12:18:16.462000 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:16.462518 kubelet[3501]: I0117 12:18:16.462031 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc1f55da61973c9433174a246291ff8e-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"cc1f55da61973c9433174a246291ff8e\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jan 17 12:18:16.462518 kubelet[3501]: I0117 12:18:16.462064 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc1f55da61973c9433174a246291ff8e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"cc1f55da61973c9433174a246291ff8e\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jan 17 12:18:16.462518 kubelet[3501]: I0117 12:18:16.462094 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a49b8d79dd234fb64b3fcf80db7a53a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"6a49b8d79dd234fb64b3fcf80db7a53a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jan 17 12:18:17.024521 kubelet[3501]: I0117 12:18:17.024468 3501 apiserver.go:52] "Watching apiserver" Jan 17 12:18:17.061030 kubelet[3501]: I0117 12:18:17.060979 3501 desired_state_of_world_populator.go:159] "Finished populating 
initial desired state of world" Jan 17 12:18:17.218930 kubelet[3501]: E0117 12:18:17.218899 3501 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-9\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-9" Jan 17 12:18:17.246761 kubelet[3501]: I0117 12:18:17.245234 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-9" podStartSLOduration=3.245188563 podStartE2EDuration="3.245188563s" podCreationTimestamp="2025-01-17 12:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:17.245164344 +0000 UTC m=+1.374655850" watchObservedRunningTime="2025-01-17 12:18:17.245188563 +0000 UTC m=+1.374680063" Jan 17 12:18:17.260425 kubelet[3501]: I0117 12:18:17.260083 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-9" podStartSLOduration=1.260012794 podStartE2EDuration="1.260012794s" podCreationTimestamp="2025-01-17 12:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:17.259030605 +0000 UTC m=+1.388522131" watchObservedRunningTime="2025-01-17 12:18:17.260012794 +0000 UTC m=+1.389504294" Jan 17 12:18:17.321292 kubelet[3501]: I0117 12:18:17.321248 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-9" podStartSLOduration=1.321195144 podStartE2EDuration="1.321195144s" podCreationTimestamp="2025-01-17 12:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:17.292745096 +0000 UTC m=+1.422236601" watchObservedRunningTime="2025-01-17 12:18:17.321195144 +0000 UTC m=+1.450686650" Jan 17 12:18:22.270655 sudo[2445]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:22.294237 sshd[2441]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:22.299524 systemd[1]: sshd@6-172.31.23.9:22-139.178.89.65:44216.service: Deactivated successfully. Jan 17 12:18:22.304935 systemd-logind[2059]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:18:22.307678 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:18:22.312213 systemd-logind[2059]: Removed session 7. Jan 17 12:18:28.532467 kubelet[3501]: I0117 12:18:28.532441 3501 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:18:28.533496 containerd[2097]: time="2025-01-17T12:18:28.533001558Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
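Just before this point the kubelet pushes the node's pod CIDR (192.168.0.0/24) to the runtime over CRI while containerd is still waiting for a CNI config to appear. As a minimal sketch, assuming kubeconfig access to this cluster and the official kubernetes Python client (none of this is part of the captured journal), the value the node ends up with can be read back from the API:

    # Hypothetical verification, assuming a reachable API server and a local kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()            # use config.load_incluster_config() when run inside a pod
    core = client.CoreV1Api()

    node = core.read_node("ip-172-31-23-9")      # node name as registered in the journal above
    print("podCIDR: ", node.spec.pod_cidr)       # expected: 192.168.0.0/24
    print("podCIDRs:", node.spec.pod_cidrs)      # dual-stack clusters list one CIDR per family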
Jan 17 12:18:28.535428 kubelet[3501]: I0117 12:18:28.534069 3501 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:18:29.057862 kubelet[3501]: I0117 12:18:29.057557 3501 topology_manager.go:215] "Topology Admit Handler" podUID="9c7cc04c-375c-4ea5-9184-f479f91482a0" podNamespace="kube-system" podName="kube-proxy-m8lp8" Jan 17 12:18:29.172272 kubelet[3501]: I0117 12:18:29.172141 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c7cc04c-375c-4ea5-9184-f479f91482a0-kube-proxy\") pod \"kube-proxy-m8lp8\" (UID: \"9c7cc04c-375c-4ea5-9184-f479f91482a0\") " pod="kube-system/kube-proxy-m8lp8" Jan 17 12:18:29.172272 kubelet[3501]: I0117 12:18:29.172194 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c7cc04c-375c-4ea5-9184-f479f91482a0-lib-modules\") pod \"kube-proxy-m8lp8\" (UID: \"9c7cc04c-375c-4ea5-9184-f479f91482a0\") " pod="kube-system/kube-proxy-m8lp8" Jan 17 12:18:29.172272 kubelet[3501]: I0117 12:18:29.172226 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c7cc04c-375c-4ea5-9184-f479f91482a0-xtables-lock\") pod \"kube-proxy-m8lp8\" (UID: \"9c7cc04c-375c-4ea5-9184-f479f91482a0\") " pod="kube-system/kube-proxy-m8lp8" Jan 17 12:18:29.172272 kubelet[3501]: I0117 12:18:29.172259 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrh9\" (UniqueName: \"kubernetes.io/projected/9c7cc04c-375c-4ea5-9184-f479f91482a0-kube-api-access-ztrh9\") pod \"kube-proxy-m8lp8\" (UID: \"9c7cc04c-375c-4ea5-9184-f479f91482a0\") " pod="kube-system/kube-proxy-m8lp8" Jan 17 12:18:29.301625 kubelet[3501]: E0117 12:18:29.301071 3501 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:18:29.301625 kubelet[3501]: E0117 12:18:29.301126 3501 projected.go:200] Error preparing data for projected volume kube-api-access-ztrh9 for pod kube-system/kube-proxy-m8lp8: configmap "kube-root-ca.crt" not found Jan 17 12:18:29.301625 kubelet[3501]: E0117 12:18:29.301331 3501 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c7cc04c-375c-4ea5-9184-f479f91482a0-kube-api-access-ztrh9 podName:9c7cc04c-375c-4ea5-9184-f479f91482a0 nodeName:}" failed. No retries permitted until 2025-01-17 12:18:29.801304629 +0000 UTC m=+13.930796116 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ztrh9" (UniqueName: "kubernetes.io/projected/9c7cc04c-375c-4ea5-9184-f479f91482a0-kube-api-access-ztrh9") pod "kube-proxy-m8lp8" (UID: "9c7cc04c-375c-4ea5-9184-f479f91482a0") : configmap "kube-root-ca.crt" not found Jan 17 12:18:29.661635 kubelet[3501]: I0117 12:18:29.661590 3501 topology_manager.go:215] "Topology Admit Handler" podUID="a6d0aa60-230d-4e76-b79f-5066a1c118e6" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-ss6jj" Jan 17 12:18:29.783852 kubelet[3501]: I0117 12:18:29.783798 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kp2z\" (UniqueName: \"kubernetes.io/projected/a6d0aa60-230d-4e76-b79f-5066a1c118e6-kube-api-access-7kp2z\") pod \"tigera-operator-c7ccbd65-ss6jj\" (UID: \"a6d0aa60-230d-4e76-b79f-5066a1c118e6\") " pod="tigera-operator/tigera-operator-c7ccbd65-ss6jj" Jan 17 12:18:29.784005 kubelet[3501]: I0117 12:18:29.783863 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a6d0aa60-230d-4e76-b79f-5066a1c118e6-var-lib-calico\") pod \"tigera-operator-c7ccbd65-ss6jj\" (UID: \"a6d0aa60-230d-4e76-b79f-5066a1c118e6\") " pod="tigera-operator/tigera-operator-c7ccbd65-ss6jj" Jan 17 12:18:29.975570 containerd[2097]: time="2025-01-17T12:18:29.975444755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m8lp8,Uid:9c7cc04c-375c-4ea5-9184-f479f91482a0,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:30.001802 containerd[2097]: time="2025-01-17T12:18:30.001729127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ss6jj,Uid:a6d0aa60-230d-4e76-b79f-5066a1c118e6,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:18:30.020814 containerd[2097]: time="2025-01-17T12:18:30.020509234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:30.020814 containerd[2097]: time="2025-01-17T12:18:30.020623233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:30.020814 containerd[2097]: time="2025-01-17T12:18:30.020661474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:30.026586 containerd[2097]: time="2025-01-17T12:18:30.020793362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:30.067122 containerd[2097]: time="2025-01-17T12:18:30.066407338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:30.067122 containerd[2097]: time="2025-01-17T12:18:30.066494446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:30.067122 containerd[2097]: time="2025-01-17T12:18:30.066521439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:30.067122 containerd[2097]: time="2025-01-17T12:18:30.066647646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:30.118650 containerd[2097]: time="2025-01-17T12:18:30.118607193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m8lp8,Uid:9c7cc04c-375c-4ea5-9184-f479f91482a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fea7826d18045c6f3e5696de731d3b53d3db97372d3efa841b0eece1d506e3b\"" Jan 17 12:18:30.130271 containerd[2097]: time="2025-01-17T12:18:30.129958764Z" level=info msg="CreateContainer within sandbox \"2fea7826d18045c6f3e5696de731d3b53d3db97372d3efa841b0eece1d506e3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:18:30.158858 containerd[2097]: time="2025-01-17T12:18:30.158745226Z" level=info msg="CreateContainer within sandbox \"2fea7826d18045c6f3e5696de731d3b53d3db97372d3efa841b0eece1d506e3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34faf737eb55f7ee8da1ca11ba2101de77617ed701452122105c5c443d0dd8c6\"" Jan 17 12:18:30.160480 containerd[2097]: time="2025-01-17T12:18:30.160376876Z" level=info msg="StartContainer for \"34faf737eb55f7ee8da1ca11ba2101de77617ed701452122105c5c443d0dd8c6\"" Jan 17 12:18:30.165004 containerd[2097]: time="2025-01-17T12:18:30.164326726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ss6jj,Uid:a6d0aa60-230d-4e76-b79f-5066a1c118e6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"34d9ed94f84f0c23f4ef41b25abece346a7801f8b1c2b61ce76333e42fc6d88b\"" Jan 17 12:18:30.173543 containerd[2097]: time="2025-01-17T12:18:30.172753733Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:18:30.233225 containerd[2097]: time="2025-01-17T12:18:30.233132539Z" level=info msg="StartContainer for \"34faf737eb55f7ee8da1ca11ba2101de77617ed701452122105c5c443d0dd8c6\" returns successfully" Jan 17 12:18:34.559754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149167666.mount: Deactivated successfully. 
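The kube-proxy entries above follow containerd's CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer reports success. A stdlib-only sketch (hypothetical helper, assuming the journal keeps the escaped-quote "returns sandbox id ..." / "returns container id ..." phrasing seen here) that pairs container ids with the sandbox and pod they belong to:

    import re
    from collections import defaultdict

    # The ids in this journal are wrapped in literal \" ... \" inside the containerd msg="..." field.
    SANDBOX_RE = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*returns sandbox id \\"([0-9a-f]+)\\"')
    CONTAINER_RE = re.compile(
        r'CreateContainer within sandbox \\"([0-9a-f]+)\\".*returns container id \\"([0-9a-f]+)\\"')

    def index_cri_ids(journal_lines):
        """Return (sandbox id -> pod name, sandbox id -> [container ids]) from journal text."""
        pods, containers = {}, defaultdict(list)
        for line in journal_lines:
            m = SANDBOX_RE.search(line)
            if m:
                pods[m.group(2)] = m.group(1)
            m = CONTAINER_RE.search(line)
            if m:
                containers[m.group(1)].append(m.group(2))
        return pods, containers

    # Against the lines above, the kube-proxy-m8lp8 sandbox 2fea7826... maps to container 34faf737...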
Jan 17 12:18:35.462990 containerd[2097]: time="2025-01-17T12:18:35.462938865Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:35.464292 containerd[2097]: time="2025-01-17T12:18:35.464164622Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764305" Jan 17 12:18:35.465597 containerd[2097]: time="2025-01-17T12:18:35.465171285Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:35.476994 containerd[2097]: time="2025-01-17T12:18:35.476931475Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:35.477875 containerd[2097]: time="2025-01-17T12:18:35.477654885Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.304734071s" Jan 17 12:18:35.477875 containerd[2097]: time="2025-01-17T12:18:35.477700276Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:18:35.488079 containerd[2097]: time="2025-01-17T12:18:35.487883834Z" level=info msg="CreateContainer within sandbox \"34d9ed94f84f0c23f4ef41b25abece346a7801f8b1c2b61ce76333e42fc6d88b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:18:35.526565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024847362.mount: Deactivated successfully. Jan 17 12:18:35.529400 containerd[2097]: time="2025-01-17T12:18:35.529361128Z" level=info msg="CreateContainer within sandbox \"34d9ed94f84f0c23f4ef41b25abece346a7801f8b1c2b61ce76333e42fc6d88b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6216f7e11596d66272f973690cb4cc764243ef19300196c8fac347830eef0189\"" Jan 17 12:18:35.534370 containerd[2097]: time="2025-01-17T12:18:35.531098083Z" level=info msg="StartContainer for \"6216f7e11596d66272f973690cb4cc764243ef19300196c8fac347830eef0189\"" Jan 17 12:18:35.581929 systemd[1]: run-containerd-runc-k8s.io-6216f7e11596d66272f973690cb4cc764243ef19300196c8fac347830eef0189-runc.i3Hc2F.mount: Deactivated successfully. 
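The pull that completes above packs everything of interest into one containerd message: image reference, image id, repo digest, a byte size, and the wall-clock pull time. A small stdlib sketch (hypothetical field names; it assumes the quoted size is in bytes, as containerd reports it) that lifts those fields out and turns them into an approximate pull throughput:

    import re

    # Assumes the \" ... \" escaping and field order of the journal's "Pulled image" message.
    PULLED_RE = re.compile(
        r'Pulled image \\"(?P<ref>[^\\"]+)\\" with image id \\"(?P<image_id>[^\\"]+)\\", '
        r'repo tag \\"[^\\"]+\\", repo digest \\"(?P<digest>[^\\"]+)\\", '
        r'size \\"(?P<size>\d+)\\" in (?P<secs>[0-9.]+)s')

    def pull_summary(line):
        m = PULLED_RE.search(line)
        if m is None:
            return None
        size = int(m["size"])        # 21758492 bytes ≈ 20.8 MiB for the operator image above
        secs = float(m["secs"])      # 5.304734071 s in the entry above
        return {
            "ref": m["ref"],
            "digest": m["digest"],
            "mib": round(size / 2**20, 1),
            "mib_per_s": round(size / 2**20 / secs, 1),   # ≈ 3.9 MiB/s for this pull
        }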
Jan 17 12:18:35.627450 containerd[2097]: time="2025-01-17T12:18:35.627415534Z" level=info msg="StartContainer for \"6216f7e11596d66272f973690cb4cc764243ef19300196c8fac347830eef0189\" returns successfully" Jan 17 12:18:36.113623 kubelet[3501]: I0117 12:18:36.113575 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-m8lp8" podStartSLOduration=7.113537055 podStartE2EDuration="7.113537055s" podCreationTimestamp="2025-01-17 12:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:31.236176466 +0000 UTC m=+15.365667970" watchObservedRunningTime="2025-01-17 12:18:36.113537055 +0000 UTC m=+20.243028558" Jan 17 12:18:39.285798 kubelet[3501]: I0117 12:18:39.282891 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-ss6jj" podStartSLOduration=4.971005192 podStartE2EDuration="10.282808377s" podCreationTimestamp="2025-01-17 12:18:29 +0000 UTC" firstStartedPulling="2025-01-17 12:18:30.166247553 +0000 UTC m=+14.295739044" lastFinishedPulling="2025-01-17 12:18:35.478050736 +0000 UTC m=+19.607542229" observedRunningTime="2025-01-17 12:18:36.275261629 +0000 UTC m=+20.404753133" watchObservedRunningTime="2025-01-17 12:18:39.282808377 +0000 UTC m=+23.412299877" Jan 17 12:18:39.285798 kubelet[3501]: I0117 12:18:39.283273 3501 topology_manager.go:215] "Topology Admit Handler" podUID="b3a9ec7e-e8ed-40b8-aa97-07f4178d80a0" podNamespace="calico-system" podName="calico-typha-fb5876cfb-wdq5f" Jan 17 12:18:39.464336 kubelet[3501]: I0117 12:18:39.464280 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3a9ec7e-e8ed-40b8-aa97-07f4178d80a0-tigera-ca-bundle\") pod \"calico-typha-fb5876cfb-wdq5f\" (UID: \"b3a9ec7e-e8ed-40b8-aa97-07f4178d80a0\") " pod="calico-system/calico-typha-fb5876cfb-wdq5f" Jan 17 12:18:39.464336 kubelet[3501]: I0117 12:18:39.464354 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfpf\" (UniqueName: \"kubernetes.io/projected/b3a9ec7e-e8ed-40b8-aa97-07f4178d80a0-kube-api-access-hvfpf\") pod \"calico-typha-fb5876cfb-wdq5f\" (UID: \"b3a9ec7e-e8ed-40b8-aa97-07f4178d80a0\") " pod="calico-system/calico-typha-fb5876cfb-wdq5f" Jan 17 12:18:39.464566 kubelet[3501]: I0117 12:18:39.464390 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b3a9ec7e-e8ed-40b8-aa97-07f4178d80a0-typha-certs\") pod \"calico-typha-fb5876cfb-wdq5f\" (UID: \"b3a9ec7e-e8ed-40b8-aa97-07f4178d80a0\") " pod="calico-system/calico-typha-fb5876cfb-wdq5f" Jan 17 12:18:39.520711 kubelet[3501]: I0117 12:18:39.520676 3501 topology_manager.go:215] "Topology Admit Handler" podUID="fc0472eb-1c9e-4c2b-8d1e-e742b15604a1" podNamespace="calico-system" podName="calico-node-795q7" Jan 17 12:18:39.600975 containerd[2097]: time="2025-01-17T12:18:39.599342019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb5876cfb-wdq5f,Uid:b3a9ec7e-e8ed-40b8-aa97-07f4178d80a0,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:39.660146 containerd[2097]: time="2025-01-17T12:18:39.659336094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:39.660146 containerd[2097]: time="2025-01-17T12:18:39.659429261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:39.660146 containerd[2097]: time="2025-01-17T12:18:39.659450507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:39.660146 containerd[2097]: time="2025-01-17T12:18:39.659597832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:39.667997 kubelet[3501]: I0117 12:18:39.667953 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-tigera-ca-bundle\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668113 kubelet[3501]: I0117 12:18:39.668024 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-var-run-calico\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668113 kubelet[3501]: I0117 12:18:39.668061 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-lib-modules\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668113 kubelet[3501]: I0117 12:18:39.668092 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-node-certs\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668336 kubelet[3501]: I0117 12:18:39.668123 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-var-lib-calico\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668336 kubelet[3501]: I0117 12:18:39.668203 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-cni-net-dir\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668336 kubelet[3501]: I0117 12:18:39.668243 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-flexvol-driver-host\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668336 kubelet[3501]: I0117 12:18:39.668279 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-cni-bin-dir\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668336 kubelet[3501]: I0117 12:18:39.668312 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-cni-log-dir\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668542 kubelet[3501]: I0117 12:18:39.668342 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-xtables-lock\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668542 kubelet[3501]: I0117 12:18:39.668375 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-policysync\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.668542 kubelet[3501]: I0117 12:18:39.668409 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcb4n\" (UniqueName: \"kubernetes.io/projected/fc0472eb-1c9e-4c2b-8d1e-e742b15604a1-kube-api-access-bcb4n\") pod \"calico-node-795q7\" (UID: \"fc0472eb-1c9e-4c2b-8d1e-e742b15604a1\") " pod="calico-system/calico-node-795q7" Jan 17 12:18:39.710172 kubelet[3501]: I0117 12:18:39.707539 3501 topology_manager.go:215] "Topology Admit Handler" podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" podNamespace="calico-system" podName="csi-node-driver-79gdb" Jan 17 12:18:39.711745 kubelet[3501]: E0117 12:18:39.710908 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79gdb" podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" Jan 17 12:18:39.769341 kubelet[3501]: I0117 12:18:39.769306 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61c77eba-5156-4cb8-a574-8dbe4d400655-kubelet-dir\") pod \"csi-node-driver-79gdb\" (UID: \"61c77eba-5156-4cb8-a574-8dbe4d400655\") " pod="calico-system/csi-node-driver-79gdb" Jan 17 12:18:39.770858 kubelet[3501]: I0117 12:18:39.770210 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/61c77eba-5156-4cb8-a574-8dbe4d400655-registration-dir\") pod \"csi-node-driver-79gdb\" (UID: \"61c77eba-5156-4cb8-a574-8dbe4d400655\") " pod="calico-system/csi-node-driver-79gdb" Jan 17 12:18:39.770858 kubelet[3501]: I0117 12:18:39.770305 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4r89\" (UniqueName: \"kubernetes.io/projected/61c77eba-5156-4cb8-a574-8dbe4d400655-kube-api-access-k4r89\") pod \"csi-node-driver-79gdb\" (UID: \"61c77eba-5156-4cb8-a574-8dbe4d400655\") " 
pod="calico-system/csi-node-driver-79gdb" Jan 17 12:18:39.770858 kubelet[3501]: I0117 12:18:39.770392 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/61c77eba-5156-4cb8-a574-8dbe4d400655-varrun\") pod \"csi-node-driver-79gdb\" (UID: \"61c77eba-5156-4cb8-a574-8dbe4d400655\") " pod="calico-system/csi-node-driver-79gdb" Jan 17 12:18:39.770858 kubelet[3501]: I0117 12:18:39.770449 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/61c77eba-5156-4cb8-a574-8dbe4d400655-socket-dir\") pod \"csi-node-driver-79gdb\" (UID: \"61c77eba-5156-4cb8-a574-8dbe4d400655\") " pod="calico-system/csi-node-driver-79gdb" Jan 17 12:18:39.790545 kubelet[3501]: E0117 12:18:39.790235 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.790545 kubelet[3501]: W0117 12:18:39.790280 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.790545 kubelet[3501]: E0117 12:18:39.790319 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.794724 kubelet[3501]: E0117 12:18:39.791055 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.794724 kubelet[3501]: W0117 12:18:39.791074 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.794724 kubelet[3501]: E0117 12:18:39.791097 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.831370 kubelet[3501]: E0117 12:18:39.831336 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.831370 kubelet[3501]: W0117 12:18:39.831366 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.831626 kubelet[3501]: E0117 12:18:39.831445 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:39.866436 containerd[2097]: time="2025-01-17T12:18:39.866297764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb5876cfb-wdq5f,Uid:b3a9ec7e-e8ed-40b8-aa97-07f4178d80a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b43fa37955a522dcc53864bf7712f5139499cbce879c18f5fa66ecb5169db64\"" Jan 17 12:18:39.873186 kubelet[3501]: E0117 12:18:39.872096 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.873331 kubelet[3501]: W0117 12:18:39.873226 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.873331 kubelet[3501]: E0117 12:18:39.873255 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.874539 containerd[2097]: time="2025-01-17T12:18:39.873672246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:18:39.874652 kubelet[3501]: E0117 12:18:39.874229 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.874652 kubelet[3501]: W0117 12:18:39.874242 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.874652 kubelet[3501]: E0117 12:18:39.874399 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.875090 kubelet[3501]: E0117 12:18:39.874888 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.875090 kubelet[3501]: W0117 12:18:39.874901 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.875245 kubelet[3501]: E0117 12:18:39.875108 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.875706 kubelet[3501]: E0117 12:18:39.875681 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.875706 kubelet[3501]: W0117 12:18:39.875697 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.875830 kubelet[3501]: E0117 12:18:39.875816 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:39.878070 kubelet[3501]: E0117 12:18:39.877728 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.878070 kubelet[3501]: W0117 12:18:39.877744 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.878436 kubelet[3501]: E0117 12:18:39.878415 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.879826 kubelet[3501]: E0117 12:18:39.879269 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.879826 kubelet[3501]: W0117 12:18:39.879320 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.879826 kubelet[3501]: E0117 12:18:39.879390 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.881083 kubelet[3501]: E0117 12:18:39.880428 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.881083 kubelet[3501]: W0117 12:18:39.880441 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.881083 kubelet[3501]: E0117 12:18:39.880504 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.881083 kubelet[3501]: E0117 12:18:39.880778 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.881083 kubelet[3501]: W0117 12:18:39.880788 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.881083 kubelet[3501]: E0117 12:18:39.880889 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.881403 kubelet[3501]: E0117 12:18:39.881235 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.881403 kubelet[3501]: W0117 12:18:39.881246 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.881493 kubelet[3501]: E0117 12:18:39.881435 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:39.883910 kubelet[3501]: E0117 12:18:39.881709 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.883910 kubelet[3501]: W0117 12:18:39.881724 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.883910 kubelet[3501]: E0117 12:18:39.881824 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.883910 kubelet[3501]: E0117 12:18:39.882122 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.883910 kubelet[3501]: W0117 12:18:39.882132 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.883910 kubelet[3501]: E0117 12:18:39.882193 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.883910 kubelet[3501]: E0117 12:18:39.882462 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.883910 kubelet[3501]: W0117 12:18:39.882491 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.883910 kubelet[3501]: E0117 12:18:39.882578 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.883910 kubelet[3501]: E0117 12:18:39.882820 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.884303 kubelet[3501]: W0117 12:18:39.882865 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.884303 kubelet[3501]: E0117 12:18:39.882930 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.884303 kubelet[3501]: E0117 12:18:39.883151 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.884303 kubelet[3501]: W0117 12:18:39.883160 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.884303 kubelet[3501]: E0117 12:18:39.883256 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:39.890656 kubelet[3501]: E0117 12:18:39.890489 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.890656 kubelet[3501]: W0117 12:18:39.890513 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.891097 kubelet[3501]: E0117 12:18:39.890941 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.891604 kubelet[3501]: E0117 12:18:39.891452 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.891604 kubelet[3501]: W0117 12:18:39.891469 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.892030 kubelet[3501]: E0117 12:18:39.891766 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.893442 kubelet[3501]: E0117 12:18:39.893427 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.893681 kubelet[3501]: W0117 12:18:39.893664 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.897220 kubelet[3501]: E0117 12:18:39.896885 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.897433 kubelet[3501]: E0117 12:18:39.897368 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.897433 kubelet[3501]: W0117 12:18:39.897380 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.897639 kubelet[3501]: E0117 12:18:39.897555 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:39.899270 kubelet[3501]: E0117 12:18:39.899036 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.899270 kubelet[3501]: W0117 12:18:39.899052 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.899795 kubelet[3501]: E0117 12:18:39.899678 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.899795 kubelet[3501]: W0117 12:18:39.899693 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.902695 kubelet[3501]: E0117 12:18:39.902574 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.902695 kubelet[3501]: W0117 12:18:39.902590 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.903809 kubelet[3501]: E0117 12:18:39.902914 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.905351 kubelet[3501]: E0117 12:18:39.905336 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.906301 kubelet[3501]: E0117 12:18:39.905487 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.907056 kubelet[3501]: E0117 12:18:39.905576 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.907056 kubelet[3501]: W0117 12:18:39.907256 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.907056 kubelet[3501]: E0117 12:18:39.907283 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.908108 kubelet[3501]: E0117 12:18:39.908066 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.908327 kubelet[3501]: W0117 12:18:39.908191 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.908327 kubelet[3501]: E0117 12:18:39.908448 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:39.913377 kubelet[3501]: E0117 12:18:39.913184 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.913377 kubelet[3501]: W0117 12:18:39.913202 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.913377 kubelet[3501]: E0117 12:18:39.913251 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.914220 kubelet[3501]: E0117 12:18:39.914062 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.914220 kubelet[3501]: W0117 12:18:39.914078 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.914220 kubelet[3501]: E0117 12:18:39.914175 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:39.915975 kubelet[3501]: E0117 12:18:39.915726 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:39.915975 kubelet[3501]: W0117 12:18:39.915784 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:39.915975 kubelet[3501]: E0117 12:18:39.915805 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:40.155158 containerd[2097]: time="2025-01-17T12:18:40.152467104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-795q7,Uid:fc0472eb-1c9e-4c2b-8d1e-e742b15604a1,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:40.242257 containerd[2097]: time="2025-01-17T12:18:40.242139527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:40.242459 containerd[2097]: time="2025-01-17T12:18:40.242221890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:40.242459 containerd[2097]: time="2025-01-17T12:18:40.242250821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:40.243287 containerd[2097]: time="2025-01-17T12:18:40.242906813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:40.299999 containerd[2097]: time="2025-01-17T12:18:40.299964496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-795q7,Uid:fc0472eb-1c9e-4c2b-8d1e-e742b15604a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ba9e043b4f4acfb44aa0dea503e8833158f184acd596f8822183b1a6efc9a51\"" Jan 17 12:18:41.084542 kubelet[3501]: E0117 12:18:41.084484 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79gdb" podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" Jan 17 12:18:41.496324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402798958.mount: Deactivated successfully. Jan 17 12:18:42.482412 containerd[2097]: time="2025-01-17T12:18:42.482280063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:42.483997 containerd[2097]: time="2025-01-17T12:18:42.483902106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:18:42.485117 containerd[2097]: time="2025-01-17T12:18:42.485067719Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:42.491487 containerd[2097]: time="2025-01-17T12:18:42.490409427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:42.492043 containerd[2097]: time="2025-01-17T12:18:42.492006035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.618289635s" Jan 17 12:18:42.492142 containerd[2097]: time="2025-01-17T12:18:42.492051894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:18:42.494431 containerd[2097]: time="2025-01-17T12:18:42.494136874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:18:42.507588 containerd[2097]: time="2025-01-17T12:18:42.507544601Z" level=info msg="CreateContainer within sandbox \"2b43fa37955a522dcc53864bf7712f5139499cbce879c18f5fa66ecb5169db64\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:18:42.531870 containerd[2097]: time="2025-01-17T12:18:42.531348650Z" level=info msg="CreateContainer within sandbox \"2b43fa37955a522dcc53864bf7712f5139499cbce879c18f5fa66ecb5169db64\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2d17e3b03cf215efab396cd43f86f5b568b151e5a5464f49db10fafb5cc8450b\"" Jan 17 12:18:42.532291 containerd[2097]: time="2025-01-17T12:18:42.532260830Z" level=info msg="StartContainer for \"2d17e3b03cf215efab396cd43f86f5b568b151e5a5464f49db10fafb5cc8450b\"" Jan 17 12:18:42.629126 containerd[2097]: time="2025-01-17T12:18:42.629083041Z" 
level=info msg="StartContainer for \"2d17e3b03cf215efab396cd43f86f5b568b151e5a5464f49db10fafb5cc8450b\" returns successfully" Jan 17 12:18:43.088518 kubelet[3501]: E0117 12:18:43.088461 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79gdb" podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" Jan 17 12:18:43.404540 kubelet[3501]: E0117 12:18:43.404109 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.404540 kubelet[3501]: W0117 12:18:43.404130 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.404540 kubelet[3501]: E0117 12:18:43.404204 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.404540 kubelet[3501]: E0117 12:18:43.404441 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.404540 kubelet[3501]: W0117 12:18:43.404451 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.404540 kubelet[3501]: E0117 12:18:43.404468 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.405199 kubelet[3501]: E0117 12:18:43.404646 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.405199 kubelet[3501]: W0117 12:18:43.404655 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.405199 kubelet[3501]: E0117 12:18:43.404669 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.405199 kubelet[3501]: E0117 12:18:43.404870 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.405199 kubelet[3501]: W0117 12:18:43.405077 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.405199 kubelet[3501]: E0117 12:18:43.405097 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:43.405641 kubelet[3501]: E0117 12:18:43.405614 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.405641 kubelet[3501]: W0117 12:18:43.405630 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.405941 kubelet[3501]: E0117 12:18:43.405648 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.406044 kubelet[3501]: E0117 12:18:43.406026 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.406044 kubelet[3501]: W0117 12:18:43.406042 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.406176 kubelet[3501]: E0117 12:18:43.406058 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.406282 kubelet[3501]: E0117 12:18:43.406264 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.406282 kubelet[3501]: W0117 12:18:43.406277 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.406446 kubelet[3501]: E0117 12:18:43.406294 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.406528 kubelet[3501]: E0117 12:18:43.406485 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.406528 kubelet[3501]: W0117 12:18:43.406495 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.406528 kubelet[3501]: E0117 12:18:43.406510 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.406798 kubelet[3501]: E0117 12:18:43.406735 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.406798 kubelet[3501]: W0117 12:18:43.406745 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.406798 kubelet[3501]: E0117 12:18:43.406761 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:43.407122 kubelet[3501]: E0117 12:18:43.407108 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.407122 kubelet[3501]: W0117 12:18:43.407123 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.407379 kubelet[3501]: E0117 12:18:43.407138 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.407566 kubelet[3501]: E0117 12:18:43.407548 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.407566 kubelet[3501]: W0117 12:18:43.407561 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.407668 kubelet[3501]: E0117 12:18:43.407578 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.407894 kubelet[3501]: E0117 12:18:43.407851 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.407894 kubelet[3501]: W0117 12:18:43.407863 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.407894 kubelet[3501]: E0117 12:18:43.407877 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.408445 kubelet[3501]: E0117 12:18:43.408104 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.408445 kubelet[3501]: W0117 12:18:43.408115 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.408445 kubelet[3501]: E0117 12:18:43.408205 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.408599 kubelet[3501]: E0117 12:18:43.408568 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.408599 kubelet[3501]: W0117 12:18:43.408578 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.408599 kubelet[3501]: E0117 12:18:43.408595 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:43.408831 kubelet[3501]: E0117 12:18:43.408815 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.409136 kubelet[3501]: W0117 12:18:43.408830 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.409136 kubelet[3501]: E0117 12:18:43.408873 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.418241 kubelet[3501]: E0117 12:18:43.418215 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.418241 kubelet[3501]: W0117 12:18:43.418235 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.418542 kubelet[3501]: E0117 12:18:43.418258 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.418542 kubelet[3501]: E0117 12:18:43.418527 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.418542 kubelet[3501]: W0117 12:18:43.418537 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.418678 kubelet[3501]: E0117 12:18:43.418558 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.419024 kubelet[3501]: E0117 12:18:43.419004 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.419095 kubelet[3501]: W0117 12:18:43.419028 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.419095 kubelet[3501]: E0117 12:18:43.419053 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.419336 kubelet[3501]: E0117 12:18:43.419319 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.419336 kubelet[3501]: W0117 12:18:43.419333 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.419573 kubelet[3501]: E0117 12:18:43.419354 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:43.419573 kubelet[3501]: E0117 12:18:43.419569 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.419742 kubelet[3501]: W0117 12:18:43.419580 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.419742 kubelet[3501]: E0117 12:18:43.419685 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.419870 kubelet[3501]: E0117 12:18:43.419828 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.419870 kubelet[3501]: W0117 12:18:43.419856 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.419966 kubelet[3501]: E0117 12:18:43.419955 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.420144 kubelet[3501]: E0117 12:18:43.420127 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.420144 kubelet[3501]: W0117 12:18:43.420141 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.420252 kubelet[3501]: E0117 12:18:43.420229 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.420414 kubelet[3501]: E0117 12:18:43.420396 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.420414 kubelet[3501]: W0117 12:18:43.420410 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.420602 kubelet[3501]: E0117 12:18:43.420517 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.420787 kubelet[3501]: E0117 12:18:43.420771 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.420787 kubelet[3501]: W0117 12:18:43.420784 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.421168 kubelet[3501]: E0117 12:18:43.421109 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:43.421462 kubelet[3501]: E0117 12:18:43.421351 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.421462 kubelet[3501]: W0117 12:18:43.421365 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.421462 kubelet[3501]: E0117 12:18:43.421387 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.421753 kubelet[3501]: E0117 12:18:43.421736 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.421753 kubelet[3501]: W0117 12:18:43.421749 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.421926 kubelet[3501]: E0117 12:18:43.421778 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.422080 kubelet[3501]: E0117 12:18:43.422063 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.422080 kubelet[3501]: W0117 12:18:43.422076 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.422181 kubelet[3501]: E0117 12:18:43.422097 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.422515 kubelet[3501]: E0117 12:18:43.422497 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.422515 kubelet[3501]: W0117 12:18:43.422511 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.422625 kubelet[3501]: E0117 12:18:43.422534 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.422792 kubelet[3501]: E0117 12:18:43.422776 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.422792 kubelet[3501]: W0117 12:18:43.422789 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.422917 kubelet[3501]: E0117 12:18:43.422815 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:43.423095 kubelet[3501]: E0117 12:18:43.423080 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.423095 kubelet[3501]: W0117 12:18:43.423093 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.423472 kubelet[3501]: E0117 12:18:43.423321 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.423944 kubelet[3501]: E0117 12:18:43.423830 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.423944 kubelet[3501]: W0117 12:18:43.423869 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.423944 kubelet[3501]: E0117 12:18:43.424002 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.425294 kubelet[3501]: E0117 12:18:43.424688 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.425294 kubelet[3501]: W0117 12:18:43.424701 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.425294 kubelet[3501]: E0117 12:18:43.424718 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:43.425294 kubelet[3501]: E0117 12:18:43.425140 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:43.425294 kubelet[3501]: W0117 12:18:43.425151 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:43.425294 kubelet[3501]: E0117 12:18:43.425178 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.315487 kubelet[3501]: I0117 12:18:44.315457 3501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:18:44.323389 kubelet[3501]: E0117 12:18:44.323255 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.323389 kubelet[3501]: W0117 12:18:44.323281 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.323925 kubelet[3501]: E0117 12:18:44.323625 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:44.324324 kubelet[3501]: E0117 12:18:44.324136 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.324324 kubelet[3501]: W0117 12:18:44.324152 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.324324 kubelet[3501]: E0117 12:18:44.324173 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.325076 kubelet[3501]: E0117 12:18:44.324867 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.325076 kubelet[3501]: W0117 12:18:44.324880 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.325076 kubelet[3501]: E0117 12:18:44.324900 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.327524 kubelet[3501]: E0117 12:18:44.325425 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.327524 kubelet[3501]: W0117 12:18:44.325439 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.327524 kubelet[3501]: E0117 12:18:44.325522 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.328602 kubelet[3501]: E0117 12:18:44.328244 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.328602 kubelet[3501]: W0117 12:18:44.328275 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.328602 kubelet[3501]: E0117 12:18:44.328295 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.329600 kubelet[3501]: E0117 12:18:44.328913 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.329600 kubelet[3501]: W0117 12:18:44.328926 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.329600 kubelet[3501]: E0117 12:18:44.328943 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:44.331402 kubelet[3501]: E0117 12:18:44.330150 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.331402 kubelet[3501]: W0117 12:18:44.330161 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.331402 kubelet[3501]: E0117 12:18:44.330294 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.333747 kubelet[3501]: E0117 12:18:44.332555 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.333747 kubelet[3501]: W0117 12:18:44.332573 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.333747 kubelet[3501]: E0117 12:18:44.332671 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.334377 kubelet[3501]: E0117 12:18:44.334041 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.334377 kubelet[3501]: W0117 12:18:44.334138 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.334377 kubelet[3501]: E0117 12:18:44.334162 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.334687 kubelet[3501]: E0117 12:18:44.334447 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.334687 kubelet[3501]: W0117 12:18:44.334458 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.334687 kubelet[3501]: E0117 12:18:44.334475 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.335623 kubelet[3501]: E0117 12:18:44.335329 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.335623 kubelet[3501]: W0117 12:18:44.335343 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.335623 kubelet[3501]: E0117 12:18:44.335369 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:44.335623 kubelet[3501]: E0117 12:18:44.335577 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.335623 kubelet[3501]: W0117 12:18:44.335586 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.335623 kubelet[3501]: E0117 12:18:44.335601 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.336647 kubelet[3501]: E0117 12:18:44.336325 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.336647 kubelet[3501]: W0117 12:18:44.336339 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.336647 kubelet[3501]: E0117 12:18:44.336356 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.336647 kubelet[3501]: E0117 12:18:44.336562 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.336647 kubelet[3501]: W0117 12:18:44.336571 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.336647 kubelet[3501]: E0117 12:18:44.336586 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.337926 kubelet[3501]: E0117 12:18:44.337209 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.337926 kubelet[3501]: W0117 12:18:44.337231 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.337926 kubelet[3501]: E0117 12:18:44.337248 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.337926 kubelet[3501]: E0117 12:18:44.337749 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.337926 kubelet[3501]: W0117 12:18:44.337763 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.337926 kubelet[3501]: E0117 12:18:44.337792 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:44.338891 kubelet[3501]: E0117 12:18:44.338492 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.338891 kubelet[3501]: W0117 12:18:44.338506 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.338891 kubelet[3501]: E0117 12:18:44.338595 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.339655 kubelet[3501]: E0117 12:18:44.339042 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.339655 kubelet[3501]: W0117 12:18:44.339053 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.339655 kubelet[3501]: E0117 12:18:44.339074 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.340350 kubelet[3501]: E0117 12:18:44.340032 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.340350 kubelet[3501]: W0117 12:18:44.340047 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.340350 kubelet[3501]: E0117 12:18:44.340241 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.341035 kubelet[3501]: E0117 12:18:44.340714 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.341035 kubelet[3501]: W0117 12:18:44.340744 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.341035 kubelet[3501]: E0117 12:18:44.340768 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.341483 kubelet[3501]: E0117 12:18:44.341318 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.341483 kubelet[3501]: W0117 12:18:44.341331 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.341483 kubelet[3501]: E0117 12:18:44.341397 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:44.342050 kubelet[3501]: E0117 12:18:44.342036 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.342253 kubelet[3501]: W0117 12:18:44.342156 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.342253 kubelet[3501]: E0117 12:18:44.342230 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.342658 kubelet[3501]: E0117 12:18:44.342575 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.342658 kubelet[3501]: W0117 12:18:44.342588 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.343153 kubelet[3501]: E0117 12:18:44.342965 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.343313 kubelet[3501]: E0117 12:18:44.343279 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.343605 kubelet[3501]: W0117 12:18:44.343293 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.343605 kubelet[3501]: E0117 12:18:44.343544 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.344123 kubelet[3501]: E0117 12:18:44.343973 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.344123 kubelet[3501]: W0117 12:18:44.344006 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.344443 kubelet[3501]: E0117 12:18:44.344281 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.344927 kubelet[3501]: E0117 12:18:44.344547 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.344927 kubelet[3501]: W0117 12:18:44.344560 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.344927 kubelet[3501]: E0117 12:18:44.344668 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:44.345347 kubelet[3501]: E0117 12:18:44.345300 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.345347 kubelet[3501]: W0117 12:18:44.345314 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.345873 kubelet[3501]: E0117 12:18:44.345616 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.346142 kubelet[3501]: E0117 12:18:44.346130 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.346407 kubelet[3501]: W0117 12:18:44.346231 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.346407 kubelet[3501]: E0117 12:18:44.346267 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.346868 kubelet[3501]: E0117 12:18:44.346717 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.346868 kubelet[3501]: W0117 12:18:44.346730 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.347203 kubelet[3501]: E0117 12:18:44.346931 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.347753 kubelet[3501]: E0117 12:18:44.347736 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.347753 kubelet[3501]: W0117 12:18:44.347753 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.347907 kubelet[3501]: E0117 12:18:44.347775 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.349349 kubelet[3501]: E0117 12:18:44.349020 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.349349 kubelet[3501]: W0117 12:18:44.349034 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.349349 kubelet[3501]: E0117 12:18:44.349055 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:44.350052 kubelet[3501]: E0117 12:18:44.350037 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.350052 kubelet[3501]: W0117 12:18:44.350053 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.350368 kubelet[3501]: E0117 12:18:44.350357 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.350808 kubelet[3501]: E0117 12:18:44.350726 3501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:44.350808 kubelet[3501]: W0117 12:18:44.350741 3501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:44.350808 kubelet[3501]: E0117 12:18:44.350760 3501 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:44.365378 containerd[2097]: time="2025-01-17T12:18:44.365315800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:44.367391 containerd[2097]: time="2025-01-17T12:18:44.367328100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:18:44.368870 containerd[2097]: time="2025-01-17T12:18:44.368791048Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:44.372067 containerd[2097]: time="2025-01-17T12:18:44.372028041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:44.373113 containerd[2097]: time="2025-01-17T12:18:44.372962274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.878780096s" Jan 17 12:18:44.373113 containerd[2097]: time="2025-01-17T12:18:44.373005432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:18:44.377074 containerd[2097]: time="2025-01-17T12:18:44.376964696Z" level=info msg="CreateContainer within sandbox \"7ba9e043b4f4acfb44aa0dea503e8833158f184acd596f8822183b1a6efc9a51\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:18:44.404524 containerd[2097]: time="2025-01-17T12:18:44.404476219Z" level=info msg="CreateContainer within sandbox 
\"7ba9e043b4f4acfb44aa0dea503e8833158f184acd596f8822183b1a6efc9a51\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2fb30d2823874397cb96e8679b5a5c4b4a68c52d968fec8531945fdde46a1a79\"" Jan 17 12:18:44.405192 containerd[2097]: time="2025-01-17T12:18:44.405113650Z" level=info msg="StartContainer for \"2fb30d2823874397cb96e8679b5a5c4b4a68c52d968fec8531945fdde46a1a79\"" Jan 17 12:18:44.529925 containerd[2097]: time="2025-01-17T12:18:44.529416808Z" level=info msg="StartContainer for \"2fb30d2823874397cb96e8679b5a5c4b4a68c52d968fec8531945fdde46a1a79\" returns successfully" Jan 17 12:18:44.581408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fb30d2823874397cb96e8679b5a5c4b4a68c52d968fec8531945fdde46a1a79-rootfs.mount: Deactivated successfully. Jan 17 12:18:44.626379 containerd[2097]: time="2025-01-17T12:18:44.597572954Z" level=info msg="shim disconnected" id=2fb30d2823874397cb96e8679b5a5c4b4a68c52d968fec8531945fdde46a1a79 namespace=k8s.io Jan 17 12:18:44.626621 containerd[2097]: time="2025-01-17T12:18:44.626383118Z" level=warning msg="cleaning up after shim disconnected" id=2fb30d2823874397cb96e8679b5a5c4b4a68c52d968fec8531945fdde46a1a79 namespace=k8s.io Jan 17 12:18:44.626621 containerd[2097]: time="2025-01-17T12:18:44.626403039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:18:44.657007 containerd[2097]: time="2025-01-17T12:18:44.656890089Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:18:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:18:45.085129 kubelet[3501]: E0117 12:18:45.085071 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79gdb" podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" Jan 17 12:18:45.320424 containerd[2097]: time="2025-01-17T12:18:45.320358198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:18:45.343731 kubelet[3501]: I0117 12:18:45.342169 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-fb5876cfb-wdq5f" podStartSLOduration=3.721593017 podStartE2EDuration="6.342116856s" podCreationTimestamp="2025-01-17 12:18:39 +0000 UTC" firstStartedPulling="2025-01-17 12:18:39.871956689 +0000 UTC m=+24.001448187" lastFinishedPulling="2025-01-17 12:18:42.49248054 +0000 UTC m=+26.621972026" observedRunningTime="2025-01-17 12:18:43.346451904 +0000 UTC m=+27.475943421" watchObservedRunningTime="2025-01-17 12:18:45.342116856 +0000 UTC m=+29.471608360" Jan 17 12:18:47.085031 kubelet[3501]: E0117 12:18:47.084992 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79gdb" podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" Jan 17 12:18:49.085475 kubelet[3501]: E0117 12:18:49.085439 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79gdb" 
podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" Jan 17 12:18:50.679297 containerd[2097]: time="2025-01-17T12:18:50.679247644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:50.681512 containerd[2097]: time="2025-01-17T12:18:50.681464163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:18:50.683794 containerd[2097]: time="2025-01-17T12:18:50.683446926Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:50.687210 containerd[2097]: time="2025-01-17T12:18:50.687165981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:50.688199 containerd[2097]: time="2025-01-17T12:18:50.688164676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.367748835s" Jan 17 12:18:50.688290 containerd[2097]: time="2025-01-17T12:18:50.688206795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:18:50.693177 containerd[2097]: time="2025-01-17T12:18:50.693049363Z" level=info msg="CreateContainer within sandbox \"7ba9e043b4f4acfb44aa0dea503e8833158f184acd596f8822183b1a6efc9a51\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:18:50.869417 containerd[2097]: time="2025-01-17T12:18:50.869378453Z" level=info msg="CreateContainer within sandbox \"7ba9e043b4f4acfb44aa0dea503e8833158f184acd596f8822183b1a6efc9a51\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb8e587a0cdeade2ca420f104fa6306f1a9453e41d42c8b1baf99460b3e6d781\"" Jan 17 12:18:50.870411 containerd[2097]: time="2025-01-17T12:18:50.870377588Z" level=info msg="StartContainer for \"fb8e587a0cdeade2ca420f104fa6306f1a9453e41d42c8b1baf99460b3e6d781\"" Jan 17 12:18:51.015334 containerd[2097]: time="2025-01-17T12:18:51.015050796Z" level=info msg="StartContainer for \"fb8e587a0cdeade2ca420f104fa6306f1a9453e41d42c8b1baf99460b3e6d781\" returns successfully" Jan 17 12:18:51.084437 kubelet[3501]: E0117 12:18:51.084398 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79gdb" podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" Jan 17 12:18:51.977606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb8e587a0cdeade2ca420f104fa6306f1a9453e41d42c8b1baf99460b3e6d781-rootfs.mount: Deactivated successfully. 
Jan 17 12:18:52.012069 containerd[2097]: time="2025-01-17T12:18:51.988550201Z" level=info msg="shim disconnected" id=fb8e587a0cdeade2ca420f104fa6306f1a9453e41d42c8b1baf99460b3e6d781 namespace=k8s.io Jan 17 12:18:52.012069 containerd[2097]: time="2025-01-17T12:18:51.988722718Z" level=warning msg="cleaning up after shim disconnected" id=fb8e587a0cdeade2ca420f104fa6306f1a9453e41d42c8b1baf99460b3e6d781 namespace=k8s.io Jan 17 12:18:52.012069 containerd[2097]: time="2025-01-17T12:18:51.988737110Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:18:52.036223 kubelet[3501]: I0117 12:18:52.036194 3501 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:18:52.118456 kubelet[3501]: I0117 12:18:52.117029 3501 topology_manager.go:215] "Topology Admit Handler" podUID="6b19cc1c-714d-4ba2-a8d1-0de091969729" podNamespace="kube-system" podName="coredns-76f75df574-skvd5" Jan 17 12:18:52.127136 kubelet[3501]: I0117 12:18:52.127086 3501 topology_manager.go:215] "Topology Admit Handler" podUID="20bd129c-9dbc-47e8-a882-de12365029b7" podNamespace="calico-apiserver" podName="calico-apiserver-8675f558fd-s9wh8" Jan 17 12:18:52.128762 kubelet[3501]: I0117 12:18:52.128715 3501 topology_manager.go:215] "Topology Admit Handler" podUID="2f5958e1-c100-4633-9f03-22bc32367a23" podNamespace="calico-apiserver" podName="calico-apiserver-8675f558fd-2mmb2" Jan 17 12:18:52.141028 kubelet[3501]: I0117 12:18:52.140558 3501 topology_manager.go:215] "Topology Admit Handler" podUID="149e3208-6b3e-4a46-b0fa-9024d88c37c0" podNamespace="kube-system" podName="coredns-76f75df574-42cdl" Jan 17 12:18:52.141028 kubelet[3501]: I0117 12:18:52.140801 3501 topology_manager.go:215] "Topology Admit Handler" podUID="ebf0bd96-fd27-4fed-9e45-06b22eb36a4a" podNamespace="calico-system" podName="calico-kube-controllers-9656dd96d-gsvkg" Jan 17 12:18:52.149408 kubelet[3501]: W0117 12:18:52.149227 3501 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-23-9" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-23-9' and this object Jan 17 12:18:52.149408 kubelet[3501]: E0117 12:18:52.149343 3501 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-23-9" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-23-9' and this object Jan 17 12:18:52.149408 kubelet[3501]: W0117 12:18:52.149398 3501 reflector.go:539] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-23-9" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-23-9' and this object Jan 17 12:18:52.149408 kubelet[3501]: E0117 12:18:52.149412 3501 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-23-9" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-23-9' and this object Jan 17 12:18:52.236966 kubelet[3501]: I0117 12:18:52.236094 
3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f5958e1-c100-4633-9f03-22bc32367a23-calico-apiserver-certs\") pod \"calico-apiserver-8675f558fd-2mmb2\" (UID: \"2f5958e1-c100-4633-9f03-22bc32367a23\") " pod="calico-apiserver/calico-apiserver-8675f558fd-2mmb2" Jan 17 12:18:52.237452 kubelet[3501]: I0117 12:18:52.237342 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20bd129c-9dbc-47e8-a882-de12365029b7-calico-apiserver-certs\") pod \"calico-apiserver-8675f558fd-s9wh8\" (UID: \"20bd129c-9dbc-47e8-a882-de12365029b7\") " pod="calico-apiserver/calico-apiserver-8675f558fd-s9wh8" Jan 17 12:18:52.238063 kubelet[3501]: I0117 12:18:52.237685 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvt24\" (UniqueName: \"kubernetes.io/projected/2f5958e1-c100-4633-9f03-22bc32367a23-kube-api-access-kvt24\") pod \"calico-apiserver-8675f558fd-2mmb2\" (UID: \"2f5958e1-c100-4633-9f03-22bc32367a23\") " pod="calico-apiserver/calico-apiserver-8675f558fd-2mmb2" Jan 17 12:18:52.260966 kubelet[3501]: I0117 12:18:52.238257 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwgvt\" (UniqueName: \"kubernetes.io/projected/20bd129c-9dbc-47e8-a882-de12365029b7-kube-api-access-wwgvt\") pod \"calico-apiserver-8675f558fd-s9wh8\" (UID: \"20bd129c-9dbc-47e8-a882-de12365029b7\") " pod="calico-apiserver/calico-apiserver-8675f558fd-s9wh8" Jan 17 12:18:52.260966 kubelet[3501]: I0117 12:18:52.238450 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md8nh\" (UniqueName: \"kubernetes.io/projected/6b19cc1c-714d-4ba2-a8d1-0de091969729-kube-api-access-md8nh\") pod \"coredns-76f75df574-skvd5\" (UID: \"6b19cc1c-714d-4ba2-a8d1-0de091969729\") " pod="kube-system/coredns-76f75df574-skvd5" Jan 17 12:18:52.260966 kubelet[3501]: I0117 12:18:52.238786 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b19cc1c-714d-4ba2-a8d1-0de091969729-config-volume\") pod \"coredns-76f75df574-skvd5\" (UID: \"6b19cc1c-714d-4ba2-a8d1-0de091969729\") " pod="kube-system/coredns-76f75df574-skvd5" Jan 17 12:18:52.340142 kubelet[3501]: I0117 12:18:52.340092 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zg5\" (UniqueName: \"kubernetes.io/projected/149e3208-6b3e-4a46-b0fa-9024d88c37c0-kube-api-access-64zg5\") pod \"coredns-76f75df574-42cdl\" (UID: \"149e3208-6b3e-4a46-b0fa-9024d88c37c0\") " pod="kube-system/coredns-76f75df574-42cdl" Jan 17 12:18:52.340305 kubelet[3501]: I0117 12:18:52.340200 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmhq\" (UniqueName: \"kubernetes.io/projected/ebf0bd96-fd27-4fed-9e45-06b22eb36a4a-kube-api-access-jsmhq\") pod \"calico-kube-controllers-9656dd96d-gsvkg\" (UID: \"ebf0bd96-fd27-4fed-9e45-06b22eb36a4a\") " pod="calico-system/calico-kube-controllers-9656dd96d-gsvkg" Jan 17 12:18:52.340305 kubelet[3501]: I0117 12:18:52.340250 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ebf0bd96-fd27-4fed-9e45-06b22eb36a4a-tigera-ca-bundle\") pod \"calico-kube-controllers-9656dd96d-gsvkg\" (UID: \"ebf0bd96-fd27-4fed-9e45-06b22eb36a4a\") " pod="calico-system/calico-kube-controllers-9656dd96d-gsvkg" Jan 17 12:18:52.340414 kubelet[3501]: I0117 12:18:52.340344 3501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/149e3208-6b3e-4a46-b0fa-9024d88c37c0-config-volume\") pod \"coredns-76f75df574-42cdl\" (UID: \"149e3208-6b3e-4a46-b0fa-9024d88c37c0\") " pod="kube-system/coredns-76f75df574-42cdl" Jan 17 12:18:52.366358 containerd[2097]: time="2025-01-17T12:18:52.366109257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:18:52.424261 containerd[2097]: time="2025-01-17T12:18:52.424159420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-skvd5,Uid:6b19cc1c-714d-4ba2-a8d1-0de091969729,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:52.753757 containerd[2097]: time="2025-01-17T12:18:52.753703419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-42cdl,Uid:149e3208-6b3e-4a46-b0fa-9024d88c37c0,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:52.759428 containerd[2097]: time="2025-01-17T12:18:52.759376475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9656dd96d-gsvkg,Uid:ebf0bd96-fd27-4fed-9e45-06b22eb36a4a,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:53.088738 containerd[2097]: time="2025-01-17T12:18:53.088076785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79gdb,Uid:61c77eba-5156-4cb8-a574-8dbe4d400655,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:53.341651 containerd[2097]: time="2025-01-17T12:18:53.341529971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8675f558fd-2mmb2,Uid:2f5958e1-c100-4633-9f03-22bc32367a23,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:18:53.350873 containerd[2097]: time="2025-01-17T12:18:53.350800423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8675f558fd-s9wh8,Uid:20bd129c-9dbc-47e8-a882-de12365029b7,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:18:53.623039 kubelet[3501]: I0117 12:18:53.621008 3501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:18:55.080995 containerd[2097]: time="2025-01-17T12:18:55.080910398Z" level=error msg="Failed to destroy network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.090248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08-shm.mount: Deactivated successfully. 
Jan 17 12:18:55.099705 containerd[2097]: time="2025-01-17T12:18:55.099611024Z" level=error msg="encountered an error cleaning up failed sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.112941 containerd[2097]: time="2025-01-17T12:18:55.112882196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8675f558fd-s9wh8,Uid:20bd129c-9dbc-47e8-a882-de12365029b7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.134273 kubelet[3501]: E0117 12:18:55.134195 3501 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.135065 kubelet[3501]: E0117 12:18:55.134307 3501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8675f558fd-s9wh8" Jan 17 12:18:55.135065 kubelet[3501]: E0117 12:18:55.134505 3501 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8675f558fd-s9wh8" Jan 17 12:18:55.135065 kubelet[3501]: E0117 12:18:55.134603 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8675f558fd-s9wh8_calico-apiserver(20bd129c-9dbc-47e8-a882-de12365029b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8675f558fd-s9wh8_calico-apiserver(20bd129c-9dbc-47e8-a882-de12365029b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8675f558fd-s9wh8" podUID="20bd129c-9dbc-47e8-a882-de12365029b7" Jan 17 12:18:55.152010 containerd[2097]: time="2025-01-17T12:18:55.151873358Z" level=error msg="Failed to destroy network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.155316 containerd[2097]: time="2025-01-17T12:18:55.154797518Z" level=error msg="encountered an error cleaning up failed sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.157015 containerd[2097]: time="2025-01-17T12:18:55.156869396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79gdb,Uid:61c77eba-5156-4cb8-a574-8dbe4d400655,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.158305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41-shm.mount: Deactivated successfully. Jan 17 12:18:55.162705 kubelet[3501]: E0117 12:18:55.162672 3501 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.162827 kubelet[3501]: E0117 12:18:55.162748 3501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79gdb" Jan 17 12:18:55.162827 kubelet[3501]: E0117 12:18:55.162781 3501 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79gdb" Jan 17 12:18:55.162996 kubelet[3501]: E0117 12:18:55.162875 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-79gdb_calico-system(61c77eba-5156-4cb8-a574-8dbe4d400655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-79gdb_calico-system(61c77eba-5156-4cb8-a574-8dbe4d400655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79gdb" podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" Jan 17 
12:18:55.184238 containerd[2097]: time="2025-01-17T12:18:55.184189859Z" level=error msg="Failed to destroy network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.187875 containerd[2097]: time="2025-01-17T12:18:55.186332605Z" level=error msg="encountered an error cleaning up failed sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.194268 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296-shm.mount: Deactivated successfully. Jan 17 12:18:55.198283 containerd[2097]: time="2025-01-17T12:18:55.198233670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-skvd5,Uid:6b19cc1c-714d-4ba2-a8d1-0de091969729,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.200546 kubelet[3501]: E0117 12:18:55.200514 3501 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.200670 kubelet[3501]: E0117 12:18:55.200583 3501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-skvd5" Jan 17 12:18:55.200670 kubelet[3501]: E0117 12:18:55.200610 3501 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-skvd5" Jan 17 12:18:55.200760 kubelet[3501]: E0117 12:18:55.200671 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-skvd5_kube-system(6b19cc1c-714d-4ba2-a8d1-0de091969729)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-skvd5_kube-system(6b19cc1c-714d-4ba2-a8d1-0de091969729)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-skvd5" podUID="6b19cc1c-714d-4ba2-a8d1-0de091969729" Jan 17 12:18:55.204229 containerd[2097]: time="2025-01-17T12:18:55.204183973Z" level=error msg="Failed to destroy network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.209075 containerd[2097]: time="2025-01-17T12:18:55.204795289Z" level=error msg="encountered an error cleaning up failed sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.209075 containerd[2097]: time="2025-01-17T12:18:55.208943830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9656dd96d-gsvkg,Uid:ebf0bd96-fd27-4fed-9e45-06b22eb36a4a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.209294 kubelet[3501]: E0117 12:18:55.209198 3501 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.209294 kubelet[3501]: E0117 12:18:55.209261 3501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9656dd96d-gsvkg" Jan 17 12:18:55.209294 kubelet[3501]: E0117 12:18:55.209292 3501 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9656dd96d-gsvkg" Jan 17 12:18:55.209448 kubelet[3501]: E0117 12:18:55.209359 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9656dd96d-gsvkg_calico-system(ebf0bd96-fd27-4fed-9e45-06b22eb36a4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9656dd96d-gsvkg_calico-system(ebf0bd96-fd27-4fed-9e45-06b22eb36a4a)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9656dd96d-gsvkg" podUID="ebf0bd96-fd27-4fed-9e45-06b22eb36a4a" Jan 17 12:18:55.215186 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367-shm.mount: Deactivated successfully. Jan 17 12:18:55.220328 containerd[2097]: time="2025-01-17T12:18:55.219966262Z" level=error msg="Failed to destroy network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.223005 containerd[2097]: time="2025-01-17T12:18:55.222704844Z" level=error msg="encountered an error cleaning up failed sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.223005 containerd[2097]: time="2025-01-17T12:18:55.222800696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8675f558fd-2mmb2,Uid:2f5958e1-c100-4633-9f03-22bc32367a23,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.224040 kubelet[3501]: E0117 12:18:55.223875 3501 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.225286 kubelet[3501]: E0117 12:18:55.224258 3501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8675f558fd-2mmb2" Jan 17 12:18:55.225286 kubelet[3501]: E0117 12:18:55.224310 3501 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8675f558fd-2mmb2" Jan 17 12:18:55.228360 kubelet[3501]: E0117 12:18:55.227902 3501 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8675f558fd-2mmb2_calico-apiserver(2f5958e1-c100-4633-9f03-22bc32367a23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8675f558fd-2mmb2_calico-apiserver(2f5958e1-c100-4633-9f03-22bc32367a23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8675f558fd-2mmb2" podUID="2f5958e1-c100-4633-9f03-22bc32367a23" Jan 17 12:18:55.238596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9-shm.mount: Deactivated successfully. Jan 17 12:18:55.241052 containerd[2097]: time="2025-01-17T12:18:55.241009520Z" level=error msg="Failed to destroy network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.241576 containerd[2097]: time="2025-01-17T12:18:55.241540206Z" level=error msg="encountered an error cleaning up failed sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.241736 containerd[2097]: time="2025-01-17T12:18:55.241708048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-42cdl,Uid:149e3208-6b3e-4a46-b0fa-9024d88c37c0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.242511 kubelet[3501]: E0117 12:18:55.242487 3501 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.242766 kubelet[3501]: E0117 12:18:55.242754 3501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-42cdl" Jan 17 12:18:55.242914 kubelet[3501]: E0117 12:18:55.242905 3501 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-42cdl" Jan 17 12:18:55.243139 kubelet[3501]: E0117 12:18:55.243127 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-42cdl_kube-system(149e3208-6b3e-4a46-b0fa-9024d88c37c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-42cdl_kube-system(149e3208-6b3e-4a46-b0fa-9024d88c37c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-42cdl" podUID="149e3208-6b3e-4a46-b0fa-9024d88c37c0" Jan 17 12:18:55.383014 kubelet[3501]: I0117 12:18:55.382914 3501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:18:55.395277 kubelet[3501]: I0117 12:18:55.395181 3501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:18:55.429877 kubelet[3501]: I0117 12:18:55.429220 3501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:18:55.436794 containerd[2097]: time="2025-01-17T12:18:55.436449887Z" level=info msg="StopPodSandbox for \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\"" Jan 17 12:18:55.438865 containerd[2097]: time="2025-01-17T12:18:55.437673955Z" level=info msg="StopPodSandbox for \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\"" Jan 17 12:18:55.438865 containerd[2097]: time="2025-01-17T12:18:55.438709421Z" level=info msg="Ensure that sandbox 1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08 in task-service has been cleanup successfully" Jan 17 12:18:55.446153 containerd[2097]: time="2025-01-17T12:18:55.446100272Z" level=info msg="StopPodSandbox for \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\"" Jan 17 12:18:55.446337 containerd[2097]: time="2025-01-17T12:18:55.438712633Z" level=info msg="Ensure that sandbox 9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296 in task-service has been cleanup successfully" Jan 17 12:18:55.447055 containerd[2097]: time="2025-01-17T12:18:55.447018635Z" level=info msg="Ensure that sandbox 57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41 in task-service has been cleanup successfully" Jan 17 12:18:55.476621 kubelet[3501]: I0117 12:18:55.476593 3501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:18:55.527995 containerd[2097]: time="2025-01-17T12:18:55.527950464Z" level=info msg="StopPodSandbox for \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\"" Jan 17 12:18:55.528546 containerd[2097]: time="2025-01-17T12:18:55.528514937Z" level=info msg="Ensure that sandbox d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9 in task-service has been cleanup successfully" Jan 17 12:18:55.564914 kubelet[3501]: I0117 12:18:55.564785 3501 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:18:55.568273 containerd[2097]: time="2025-01-17T12:18:55.567579460Z" level=info msg="StopPodSandbox for \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\"" Jan 17 12:18:55.568273 containerd[2097]: time="2025-01-17T12:18:55.567819074Z" level=info msg="Ensure that sandbox 7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367 in task-service has been cleanup successfully" Jan 17 12:18:55.581595 kubelet[3501]: I0117 12:18:55.581565 3501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:18:55.583544 containerd[2097]: time="2025-01-17T12:18:55.583503541Z" level=info msg="StopPodSandbox for \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\"" Jan 17 12:18:55.583937 containerd[2097]: time="2025-01-17T12:18:55.583721225Z" level=info msg="Ensure that sandbox e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14 in task-service has been cleanup successfully" Jan 17 12:18:55.681274 containerd[2097]: time="2025-01-17T12:18:55.681042107Z" level=error msg="StopPodSandbox for \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\" failed" error="failed to destroy network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.685618 kubelet[3501]: E0117 12:18:55.685585 3501 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:18:55.693965 kubelet[3501]: E0117 12:18:55.693921 3501 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296"} Jan 17 12:18:55.694936 kubelet[3501]: E0117 12:18:55.694002 3501 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b19cc1c-714d-4ba2-a8d1-0de091969729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:55.694936 kubelet[3501]: E0117 12:18:55.694042 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b19cc1c-714d-4ba2-a8d1-0de091969729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-76f75df574-skvd5" podUID="6b19cc1c-714d-4ba2-a8d1-0de091969729" Jan 17 12:18:55.714331 containerd[2097]: time="2025-01-17T12:18:55.714160273Z" level=error msg="StopPodSandbox for \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\" failed" error="failed to destroy network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.714826 kubelet[3501]: E0117 12:18:55.714784 3501 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:18:55.714959 kubelet[3501]: E0117 12:18:55.714851 3501 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08"} Jan 17 12:18:55.714959 kubelet[3501]: E0117 12:18:55.714899 3501 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20bd129c-9dbc-47e8-a882-de12365029b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:55.714959 kubelet[3501]: E0117 12:18:55.714937 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20bd129c-9dbc-47e8-a882-de12365029b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8675f558fd-s9wh8" podUID="20bd129c-9dbc-47e8-a882-de12365029b7" Jan 17 12:18:55.717811 containerd[2097]: time="2025-01-17T12:18:55.717324962Z" level=error msg="StopPodSandbox for \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\" failed" error="failed to destroy network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.718142 kubelet[3501]: E0117 12:18:55.717718 3501 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:18:55.718142 kubelet[3501]: E0117 12:18:55.717770 3501 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41"} Jan 17 12:18:55.718142 kubelet[3501]: E0117 12:18:55.717849 3501 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61c77eba-5156-4cb8-a574-8dbe4d400655\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:55.718142 kubelet[3501]: E0117 12:18:55.717894 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61c77eba-5156-4cb8-a574-8dbe4d400655\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79gdb" podUID="61c77eba-5156-4cb8-a574-8dbe4d400655" Jan 17 12:18:55.746260 containerd[2097]: time="2025-01-17T12:18:55.745471140Z" level=error msg="StopPodSandbox for \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\" failed" error="failed to destroy network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.746503 kubelet[3501]: E0117 12:18:55.745985 3501 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:18:55.746503 kubelet[3501]: E0117 12:18:55.746063 3501 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9"} Jan 17 12:18:55.746503 kubelet[3501]: E0117 12:18:55.746210 3501 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f5958e1-c100-4633-9f03-22bc32367a23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:55.746503 kubelet[3501]: E0117 12:18:55.746417 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f5958e1-c100-4633-9f03-22bc32367a23\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8675f558fd-2mmb2" podUID="2f5958e1-c100-4633-9f03-22bc32367a23" Jan 17 12:18:55.751571 containerd[2097]: time="2025-01-17T12:18:55.751075474Z" level=error msg="StopPodSandbox for \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\" failed" error="failed to destroy network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.751726 kubelet[3501]: E0117 12:18:55.751386 3501 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:18:55.751726 kubelet[3501]: E0117 12:18:55.751431 3501 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367"} Jan 17 12:18:55.751726 kubelet[3501]: E0117 12:18:55.751484 3501 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebf0bd96-fd27-4fed-9e45-06b22eb36a4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:55.751726 kubelet[3501]: E0117 12:18:55.751529 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebf0bd96-fd27-4fed-9e45-06b22eb36a4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9656dd96d-gsvkg" podUID="ebf0bd96-fd27-4fed-9e45-06b22eb36a4a" Jan 17 12:18:55.759302 containerd[2097]: time="2025-01-17T12:18:55.759250553Z" level=error msg="StopPodSandbox for \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\" failed" error="failed to destroy network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:18:55.759593 kubelet[3501]: E0117 12:18:55.759567 3501 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:18:55.759687 kubelet[3501]: E0117 12:18:55.759677 3501 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14"} Jan 17 12:18:55.759811 kubelet[3501]: E0117 12:18:55.759758 3501 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"149e3208-6b3e-4a46-b0fa-9024d88c37c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:18:55.759954 kubelet[3501]: E0117 12:18:55.759882 3501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"149e3208-6b3e-4a46-b0fa-9024d88c37c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-42cdl" podUID="149e3208-6b3e-4a46-b0fa-9024d88c37c0" Jan 17 12:18:56.105392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14-shm.mount: Deactivated successfully. Jan 17 12:18:56.123912 systemd-journald[1570]: Under memory pressure, flushing caches. Jan 17 12:18:56.122763 systemd-resolved[1972]: Under memory pressure, flushing caches. Jan 17 12:18:56.122873 systemd-resolved[1972]: Flushed all caches. Jan 17 12:19:00.144082 systemd-resolved[1972]: Under memory pressure, flushing caches. Jan 17 12:19:00.144111 systemd-resolved[1972]: Flushed all caches. Jan 17 12:19:00.145882 systemd-journald[1570]: Under memory pressure, flushing caches. Jan 17 12:19:01.192400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093747738.mount: Deactivated successfully. 
Jan 17 12:19:01.478882 containerd[2097]: time="2025-01-17T12:19:01.466420066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:19:01.616225 containerd[2097]: time="2025-01-17T12:19:01.613365385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:01.850753 containerd[2097]: time="2025-01-17T12:19:01.850693519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.184380209s" Jan 17 12:19:01.850753 containerd[2097]: time="2025-01-17T12:19:01.850750647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:19:01.856382 containerd[2097]: time="2025-01-17T12:19:01.855487804Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:01.860451 containerd[2097]: time="2025-01-17T12:19:01.860355337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:02.191923 systemd-resolved[1972]: Under memory pressure, flushing caches. Jan 17 12:19:02.211512 systemd-journald[1570]: Under memory pressure, flushing caches. Jan 17 12:19:02.191935 systemd-resolved[1972]: Flushed all caches. Jan 17 12:19:02.506403 containerd[2097]: time="2025-01-17T12:19:02.506261810Z" level=info msg="CreateContainer within sandbox \"7ba9e043b4f4acfb44aa0dea503e8833158f184acd596f8822183b1a6efc9a51\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:19:02.706571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453942319.mount: Deactivated successfully. Jan 17 12:19:02.748217 containerd[2097]: time="2025-01-17T12:19:02.748156517Z" level=info msg="CreateContainer within sandbox \"7ba9e043b4f4acfb44aa0dea503e8833158f184acd596f8822183b1a6efc9a51\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b9e931e8c71cfa31e6da26c1bba475ab4ae832fb4dcee6008c1f4fa0b39b604a\"" Jan 17 12:19:02.758539 containerd[2097]: time="2025-01-17T12:19:02.758236097Z" level=info msg="StartContainer for \"b9e931e8c71cfa31e6da26c1bba475ab4ae832fb4dcee6008c1f4fa0b39b604a\"" Jan 17 12:19:03.213366 containerd[2097]: time="2025-01-17T12:19:03.212341725Z" level=info msg="StartContainer for \"b9e931e8c71cfa31e6da26c1bba475ab4ae832fb4dcee6008c1f4fa0b39b604a\" returns successfully" Jan 17 12:19:03.469869 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:19:03.470512 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:19:04.240082 systemd-resolved[1972]: Under memory pressure, flushing caches. Jan 17 12:19:04.240092 systemd-resolved[1972]: Flushed all caches. Jan 17 12:19:04.241874 systemd-journald[1570]: Under memory pressure, flushing caches. 
Jan 17 12:19:04.755603 kubelet[3501]: I0117 12:19:04.755547 3501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:19:05.473582 systemd[1]: Started sshd@7-172.31.23.9:22-139.178.89.65:58118.service - OpenSSH per-connection server daemon (139.178.89.65:58118). Jan 17 12:19:05.802539 sshd[4701]: Accepted publickey for core from 139.178.89.65 port 58118 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:19:05.806092 sshd[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:05.834949 systemd-logind[2059]: New session 8 of user core. Jan 17 12:19:05.840589 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:19:06.066214 kubelet[3501]: I0117 12:19:06.065991 3501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:19:06.292931 systemd-journald[1570]: Under memory pressure, flushing caches. Jan 17 12:19:06.287937 systemd-resolved[1972]: Under memory pressure, flushing caches. Jan 17 12:19:06.287946 systemd-resolved[1972]: Flushed all caches. Jan 17 12:19:06.372039 sshd[4701]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:06.381264 systemd[1]: sshd@7-172.31.23.9:22-139.178.89.65:58118.service: Deactivated successfully. Jan 17 12:19:06.399686 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:19:06.403833 systemd-logind[2059]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:19:06.408115 systemd-logind[2059]: Removed session 8. Jan 17 12:19:06.594905 kernel: bpftool[4783]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:19:06.945233 systemd-networkd[1647]: vxlan.calico: Link UP Jan 17 12:19:06.947170 systemd-networkd[1647]: vxlan.calico: Gained carrier Jan 17 12:19:06.953224 (udev-worker)[4823]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:19:06.993555 (udev-worker)[4833]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 12:19:07.086213 containerd[2097]: time="2025-01-17T12:19:07.086163602Z" level=info msg="StopPodSandbox for \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\"" Jan 17 12:19:07.397294 kubelet[3501]: I0117 12:19:07.397243 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-795q7" podStartSLOduration=6.677448348 podStartE2EDuration="28.368242235s" podCreationTimestamp="2025-01-17 12:18:39 +0000 UTC" firstStartedPulling="2025-01-17 12:18:40.30197355 +0000 UTC m=+24.431465034" lastFinishedPulling="2025-01-17 12:19:01.992767431 +0000 UTC m=+46.122258921" observedRunningTime="2025-01-17 12:19:03.900665024 +0000 UTC m=+48.030156528" watchObservedRunningTime="2025-01-17 12:19:07.368242235 +0000 UTC m=+51.497733766" Jan 17 12:19:08.091039 containerd[2097]: time="2025-01-17T12:19:08.087956876Z" level=info msg="StopPodSandbox for \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\"" Jan 17 12:19:08.091737 containerd[2097]: time="2025-01-17T12:19:08.087956871Z" level=info msg="StopPodSandbox for \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\"" Jan 17 12:19:08.093866 containerd[2097]: time="2025-01-17T12:19:08.093777831Z" level=info msg="StopPodSandbox for \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\"" Jan 17 12:19:08.466289 systemd-networkd[1647]: vxlan.calico: Gained IPv6LL Jan 17 12:19:09.084766 containerd[2097]: time="2025-01-17T12:19:09.084725149Z" level=info msg="StopPodSandbox for \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\"" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.169 [INFO][4980] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.169 [INFO][4980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" iface="eth0" netns="/var/run/netns/cni-a3c0fec8-8037-7a8a-f024-0e70f95a2953" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.170 [INFO][4980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" iface="eth0" netns="/var/run/netns/cni-a3c0fec8-8037-7a8a-f024-0e70f95a2953" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.170 [INFO][4980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" iface="eth0" netns="/var/run/netns/cni-a3c0fec8-8037-7a8a-f024-0e70f95a2953" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.170 [INFO][4980] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.170 [INFO][4980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.620 [INFO][4989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" HandleID="k8s-pod-network.57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.622 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.622 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.633 [WARNING][4989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" HandleID="k8s-pod-network.57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.633 [INFO][4989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" HandleID="k8s-pod-network.57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.636 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:09.642908 containerd[2097]: 2025-01-17 12:19:09.639 [INFO][4980] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:09.651284 containerd[2097]: time="2025-01-17T12:19:09.645726045Z" level=info msg="TearDown network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\" successfully" Jan 17 12:19:09.651284 containerd[2097]: time="2025-01-17T12:19:09.645763269Z" level=info msg="StopPodSandbox for \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\" returns successfully" Jan 17 12:19:09.652787 systemd[1]: run-netns-cni\x2da3c0fec8\x2d8037\x2d7a8a\x2df024\x2d0e70f95a2953.mount: Deactivated successfully. Jan 17 12:19:09.660607 containerd[2097]: time="2025-01-17T12:19:09.660564280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79gdb,Uid:61c77eba-5156-4cb8-a574-8dbe4d400655,Namespace:calico-system,Attempt:1,}" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:07.370 [INFO][4851] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:07.371 [INFO][4851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" iface="eth0" netns="/var/run/netns/cni-320a106a-26ac-9f86-4bde-b7a61d8b840b" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:07.373 [INFO][4851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" iface="eth0" netns="/var/run/netns/cni-320a106a-26ac-9f86-4bde-b7a61d8b840b" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:07.377 [INFO][4851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" iface="eth0" netns="/var/run/netns/cni-320a106a-26ac-9f86-4bde-b7a61d8b840b" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:07.377 [INFO][4851] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:07.377 [INFO][4851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:09.620 [INFO][4890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" HandleID="k8s-pod-network.e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:09.622 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:09.636 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:09.646 [WARNING][4890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" HandleID="k8s-pod-network.e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:09.646 [INFO][4890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" HandleID="k8s-pod-network.e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:09.655 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:09.670775 containerd[2097]: 2025-01-17 12:19:09.667 [INFO][4851] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:09.676728 containerd[2097]: time="2025-01-17T12:19:09.674259335Z" level=info msg="TearDown network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\" successfully" Jan 17 12:19:09.676728 containerd[2097]: time="2025-01-17T12:19:09.674492629Z" level=info msg="StopPodSandbox for \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\" returns successfully" Jan 17 12:19:09.677654 systemd[1]: run-netns-cni\x2d320a106a\x2d26ac\x2d9f86\x2d4bde\x2db7a61d8b840b.mount: Deactivated successfully. 
Jan 17 12:19:09.679141 containerd[2097]: time="2025-01-17T12:19:09.678830841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-42cdl,Uid:149e3208-6b3e-4a46-b0fa-9024d88c37c0,Namespace:kube-system,Attempt:1,}" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:08.270 [INFO][4934] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:08.273 [INFO][4934] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" iface="eth0" netns="/var/run/netns/cni-08504c64-b3dc-bd40-5747-f78456a75966" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:08.273 [INFO][4934] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" iface="eth0" netns="/var/run/netns/cni-08504c64-b3dc-bd40-5747-f78456a75966" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:08.276 [INFO][4934] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" iface="eth0" netns="/var/run/netns/cni-08504c64-b3dc-bd40-5747-f78456a75966" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:08.277 [INFO][4934] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:08.277 [INFO][4934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:09.620 [INFO][4953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" HandleID="k8s-pod-network.7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:09.622 [INFO][4953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:09.655 [INFO][4953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:09.669 [WARNING][4953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" HandleID="k8s-pod-network.7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:09.669 [INFO][4953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" HandleID="k8s-pod-network.7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:09.677 [INFO][4953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:09.701036 containerd[2097]: 2025-01-17 12:19:09.692 [INFO][4934] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:09.713591 containerd[2097]: time="2025-01-17T12:19:09.707939325Z" level=info msg="TearDown network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\" successfully" Jan 17 12:19:09.713591 containerd[2097]: time="2025-01-17T12:19:09.707982623Z" level=info msg="StopPodSandbox for \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\" returns successfully" Jan 17 12:19:09.712292 systemd[1]: run-netns-cni\x2d08504c64\x2db3dc\x2dbd40\x2d5747\x2df78456a75966.mount: Deactivated successfully. Jan 17 12:19:09.721656 containerd[2097]: time="2025-01-17T12:19:09.718019818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9656dd96d-gsvkg,Uid:ebf0bd96-fd27-4fed-9e45-06b22eb36a4a,Namespace:calico-system,Attempt:1,}" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:08.308 [INFO][4936] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:08.309 [INFO][4936] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" iface="eth0" netns="/var/run/netns/cni-eda60a71-622e-8107-445f-1320073726de" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:08.310 [INFO][4936] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" iface="eth0" netns="/var/run/netns/cni-eda60a71-622e-8107-445f-1320073726de" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:08.310 [INFO][4936] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" iface="eth0" netns="/var/run/netns/cni-eda60a71-622e-8107-445f-1320073726de" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:08.311 [INFO][4936] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:08.311 [INFO][4936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:09.620 [INFO][4961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" HandleID="k8s-pod-network.d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:09.622 [INFO][4961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:09.678 [INFO][4961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:09.697 [WARNING][4961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" HandleID="k8s-pod-network.d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:09.697 [INFO][4961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" HandleID="k8s-pod-network.d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:09.703 [INFO][4961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:09.773473 containerd[2097]: 2025-01-17 12:19:09.733 [INFO][4936] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:09.774572 containerd[2097]: time="2025-01-17T12:19:09.774046673Z" level=info msg="TearDown network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\" successfully" Jan 17 12:19:09.774572 containerd[2097]: time="2025-01-17T12:19:09.774079289Z" level=info msg="StopPodSandbox for \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\" returns successfully" Jan 17 12:19:09.775714 containerd[2097]: time="2025-01-17T12:19:09.775432245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8675f558fd-2mmb2,Uid:2f5958e1-c100-4633-9f03-22bc32367a23,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:08.301 [INFO][4935] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:08.303 [INFO][4935] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" iface="eth0" netns="/var/run/netns/cni-6c7cd4fc-2cbc-1249-6197-5fde0c4607e2" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:08.303 [INFO][4935] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" iface="eth0" netns="/var/run/netns/cni-6c7cd4fc-2cbc-1249-6197-5fde0c4607e2" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:08.304 [INFO][4935] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" iface="eth0" netns="/var/run/netns/cni-6c7cd4fc-2cbc-1249-6197-5fde0c4607e2" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:08.304 [INFO][4935] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:08.304 [INFO][4935] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:09.620 [INFO][4957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" HandleID="k8s-pod-network.9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:09.623 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:09.704 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:09.739 [WARNING][4957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" HandleID="k8s-pod-network.9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:09.741 [INFO][4957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" HandleID="k8s-pod-network.9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:09.765 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:09.796127 containerd[2097]: 2025-01-17 12:19:09.779 [INFO][4935] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:09.799127 containerd[2097]: time="2025-01-17T12:19:09.797018159Z" level=info msg="TearDown network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\" successfully" Jan 17 12:19:09.799127 containerd[2097]: time="2025-01-17T12:19:09.797068987Z" level=info msg="StopPodSandbox for \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\" returns successfully" Jan 17 12:19:09.802826 containerd[2097]: time="2025-01-17T12:19:09.802786187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-skvd5,Uid:6b19cc1c-714d-4ba2-a8d1-0de091969729,Namespace:kube-system,Attempt:1,}" Jan 17 12:19:10.378373 systemd-networkd[1647]: cali7b72a81373e: Link UP Jan 17 12:19:10.383110 systemd-networkd[1647]: cali7b72a81373e: Gained carrier Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:09.932 [INFO][5008] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0 coredns-76f75df574- kube-system 149e3208-6b3e-4a46-b0fa-9024d88c37c0 801 0 2025-01-17 12:18:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-9 coredns-76f75df574-42cdl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b72a81373e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Namespace="kube-system" Pod="coredns-76f75df574-42cdl" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:09.939 [INFO][5008] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Namespace="kube-system" Pod="coredns-76f75df574-42cdl" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.152 [INFO][5058] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" HandleID="k8s-pod-network.3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.185 [INFO][5058] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" HandleID="k8s-pod-network.3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c3b50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-9", "pod":"coredns-76f75df574-42cdl", "timestamp":"2025-01-17 12:19:10.152884989 +0000 UTC"}, Hostname:"ip-172-31-23-9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.185 [INFO][5058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.185 [INFO][5058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.185 [INFO][5058] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-9' Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.201 [INFO][5058] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" host="ip-172-31-23-9" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.254 [INFO][5058] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-9" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.273 [INFO][5058] ipam/ipam.go 489: Trying affinity for 192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.279 [INFO][5058] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.287 [INFO][5058] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.287 [INFO][5058] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" host="ip-172-31-23-9" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.291 [INFO][5058] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.302 [INFO][5058] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" host="ip-172-31-23-9" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.319 [INFO][5058] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.193/26] block=192.168.79.192/26 handle="k8s-pod-network.3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" host="ip-172-31-23-9" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.320 [INFO][5058] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.193/26] handle="k8s-pod-network.3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" host="ip-172-31-23-9" Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.321 [INFO][5058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:19:10.447937 containerd[2097]: 2025-01-17 12:19:10.321 [INFO][5058] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.193/26] IPv6=[] ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" HandleID="k8s-pod-network.3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:10.450235 containerd[2097]: 2025-01-17 12:19:10.334 [INFO][5008] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Namespace="kube-system" Pod="coredns-76f75df574-42cdl" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"149e3208-6b3e-4a46-b0fa-9024d88c37c0", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"", Pod:"coredns-76f75df574-42cdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b72a81373e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:10.450235 containerd[2097]: 2025-01-17 12:19:10.335 [INFO][5008] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.193/32] ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Namespace="kube-system" Pod="coredns-76f75df574-42cdl" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:10.450235 containerd[2097]: 2025-01-17 12:19:10.335 [INFO][5008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b72a81373e ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Namespace="kube-system" Pod="coredns-76f75df574-42cdl" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:10.450235 containerd[2097]: 2025-01-17 12:19:10.385 [INFO][5008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Namespace="kube-system" Pod="coredns-76f75df574-42cdl" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 
12:19:10.450235 containerd[2097]: 2025-01-17 12:19:10.386 [INFO][5008] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Namespace="kube-system" Pod="coredns-76f75df574-42cdl" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"149e3208-6b3e-4a46-b0fa-9024d88c37c0", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde", Pod:"coredns-76f75df574-42cdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b72a81373e", MAC:"4a:8f:64:1b:f5:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:10.450235 containerd[2097]: 2025-01-17 12:19:10.435 [INFO][5008] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde" Namespace="kube-system" Pod="coredns-76f75df574-42cdl" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:10.525851 systemd-networkd[1647]: calia7e5cafe35f: Link UP Jan 17 12:19:10.527115 systemd-networkd[1647]: calia7e5cafe35f: Gained carrier Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:09.954 [INFO][5003] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0 csi-node-driver- calico-system 61c77eba-5156-4cb8-a574-8dbe4d400655 816 0 2025-01-17 12:18:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-9 csi-node-driver-79gdb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia7e5cafe35f [] []}} ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" 
Namespace="calico-system" Pod="csi-node-driver-79gdb" WorkloadEndpoint="ip--172--31--23--9-k8s-csi--node--driver--79gdb-" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:09.957 [INFO][5003] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Namespace="calico-system" Pod="csi-node-driver-79gdb" WorkloadEndpoint="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.230 [INFO][5063] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" HandleID="k8s-pod-network.9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.270 [INFO][5063] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" HandleID="k8s-pod-network.9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b270), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-9", "pod":"csi-node-driver-79gdb", "timestamp":"2025-01-17 12:19:10.23037554 +0000 UTC"}, Hostname:"ip-172-31-23-9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.270 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.323 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.323 [INFO][5063] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-9' Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.328 [INFO][5063] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" host="ip-172-31-23-9" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.361 [INFO][5063] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-9" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.420 [INFO][5063] ipam/ipam.go 489: Trying affinity for 192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.430 [INFO][5063] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.445 [INFO][5063] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.446 [INFO][5063] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" host="ip-172-31-23-9" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.451 [INFO][5063] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371 Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.460 [INFO][5063] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" host="ip-172-31-23-9" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.477 [INFO][5063] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.194/26] block=192.168.79.192/26 handle="k8s-pod-network.9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" host="ip-172-31-23-9" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.479 [INFO][5063] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.194/26] handle="k8s-pod-network.9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" host="ip-172-31-23-9" Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.480 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:19:10.587594 containerd[2097]: 2025-01-17 12:19:10.480 [INFO][5063] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.194/26] IPv6=[] ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" HandleID="k8s-pod-network.9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:10.588573 containerd[2097]: 2025-01-17 12:19:10.499 [INFO][5003] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Namespace="calico-system" Pod="csi-node-driver-79gdb" WorkloadEndpoint="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"61c77eba-5156-4cb8-a574-8dbe4d400655", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"", Pod:"csi-node-driver-79gdb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7e5cafe35f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:10.588573 containerd[2097]: 2025-01-17 12:19:10.501 [INFO][5003] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.194/32] ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Namespace="calico-system" Pod="csi-node-driver-79gdb" WorkloadEndpoint="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:10.588573 containerd[2097]: 2025-01-17 12:19:10.501 [INFO][5003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7e5cafe35f ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Namespace="calico-system" Pod="csi-node-driver-79gdb" WorkloadEndpoint="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:10.588573 containerd[2097]: 2025-01-17 12:19:10.527 [INFO][5003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Namespace="calico-system" Pod="csi-node-driver-79gdb" WorkloadEndpoint="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:10.588573 containerd[2097]: 2025-01-17 12:19:10.535 [INFO][5003] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Namespace="calico-system" Pod="csi-node-driver-79gdb" 
WorkloadEndpoint="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"61c77eba-5156-4cb8-a574-8dbe4d400655", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371", Pod:"csi-node-driver-79gdb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7e5cafe35f", MAC:"6e:17:dc:f9:d4:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:10.588573 containerd[2097]: 2025-01-17 12:19:10.577 [INFO][5003] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371" Namespace="calico-system" Pod="csi-node-driver-79gdb" WorkloadEndpoint="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:10.591207 containerd[2097]: time="2025-01-17T12:19:10.587455809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:10.591207 containerd[2097]: time="2025-01-17T12:19:10.589582730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:10.591207 containerd[2097]: time="2025-01-17T12:19:10.589615882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:10.591832 containerd[2097]: time="2025-01-17T12:19:10.590440605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:10.694774 systemd[1]: run-netns-cni\x2deda60a71\x2d622e\x2d8107\x2d445f\x2d1320073726de.mount: Deactivated successfully. Jan 17 12:19:10.694997 systemd[1]: run-netns-cni\x2d6c7cd4fc\x2d2cbc\x2d1249\x2d6197\x2d5fde0c4607e2.mount: Deactivated successfully. 
Jan 17 12:19:10.719102 systemd-networkd[1647]: cali5f0f80c1e0f: Link UP Jan 17 12:19:10.728414 systemd-networkd[1647]: cali5f0f80c1e0f: Gained carrier Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.013 [INFO][5022] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0 calico-kube-controllers-9656dd96d- calico-system ebf0bd96-fd27-4fed-9e45-06b22eb36a4a 807 0 2025-01-17 12:18:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9656dd96d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-9 calico-kube-controllers-9656dd96d-gsvkg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5f0f80c1e0f [] []}} ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Namespace="calico-system" Pod="calico-kube-controllers-9656dd96d-gsvkg" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.016 [INFO][5022] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Namespace="calico-system" Pod="calico-kube-controllers-9656dd96d-gsvkg" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.307 [INFO][5071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" HandleID="k8s-pod-network.20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.370 [INFO][5071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" HandleID="k8s-pod-network.20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051b00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-9", "pod":"calico-kube-controllers-9656dd96d-gsvkg", "timestamp":"2025-01-17 12:19:10.307637896 +0000 UTC"}, Hostname:"ip-172-31-23-9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.379 [INFO][5071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.481 [INFO][5071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.481 [INFO][5071] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-9' Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.486 [INFO][5071] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" host="ip-172-31-23-9" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.507 [INFO][5071] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-9" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.550 [INFO][5071] ipam/ipam.go 489: Trying affinity for 192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.578 [INFO][5071] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.600 [INFO][5071] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.601 [INFO][5071] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" host="ip-172-31-23-9" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.605 [INFO][5071] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6 Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.618 [INFO][5071] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" host="ip-172-31-23-9" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.630 [INFO][5071] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.195/26] block=192.168.79.192/26 handle="k8s-pod-network.20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" host="ip-172-31-23-9" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.631 [INFO][5071] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.195/26] handle="k8s-pod-network.20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" host="ip-172-31-23-9" Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.633 [INFO][5071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:19:10.844902 containerd[2097]: 2025-01-17 12:19:10.633 [INFO][5071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.195/26] IPv6=[] ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" HandleID="k8s-pod-network.20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:10.847667 containerd[2097]: 2025-01-17 12:19:10.644 [INFO][5022] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Namespace="calico-system" Pod="calico-kube-controllers-9656dd96d-gsvkg" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0", GenerateName:"calico-kube-controllers-9656dd96d-", Namespace:"calico-system", SelfLink:"", UID:"ebf0bd96-fd27-4fed-9e45-06b22eb36a4a", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9656dd96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"", Pod:"calico-kube-controllers-9656dd96d-gsvkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f0f80c1e0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:10.847667 containerd[2097]: 2025-01-17 12:19:10.644 [INFO][5022] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.195/32] ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Namespace="calico-system" Pod="calico-kube-controllers-9656dd96d-gsvkg" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:10.847667 containerd[2097]: 2025-01-17 12:19:10.644 [INFO][5022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f0f80c1e0f ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Namespace="calico-system" Pod="calico-kube-controllers-9656dd96d-gsvkg" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:10.847667 containerd[2097]: 2025-01-17 12:19:10.732 [INFO][5022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Namespace="calico-system" Pod="calico-kube-controllers-9656dd96d-gsvkg" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:10.847667 containerd[2097]: 2025-01-17 12:19:10.756 [INFO][5022] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Namespace="calico-system" Pod="calico-kube-controllers-9656dd96d-gsvkg" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0", GenerateName:"calico-kube-controllers-9656dd96d-", Namespace:"calico-system", SelfLink:"", UID:"ebf0bd96-fd27-4fed-9e45-06b22eb36a4a", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9656dd96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6", Pod:"calico-kube-controllers-9656dd96d-gsvkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f0f80c1e0f", MAC:"76:41:d6:33:e1:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:10.847667 containerd[2097]: 2025-01-17 12:19:10.826 [INFO][5022] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6" Namespace="calico-system" Pod="calico-kube-controllers-9656dd96d-gsvkg" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:10.891864 containerd[2097]: time="2025-01-17T12:19:10.890768203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:10.893006 containerd[2097]: time="2025-01-17T12:19:10.892620654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:10.893006 containerd[2097]: time="2025-01-17T12:19:10.892668579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:10.893006 containerd[2097]: time="2025-01-17T12:19:10.892800208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:11.016442 systemd-networkd[1647]: calidb1c9f67b64: Link UP Jan 17 12:19:11.018511 systemd-networkd[1647]: calidb1c9f67b64: Gained carrier Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.158 [INFO][5042] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0 coredns-76f75df574- kube-system 6b19cc1c-714d-4ba2-a8d1-0de091969729 808 0 2025-01-17 12:18:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-9 coredns-76f75df574-skvd5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidb1c9f67b64 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Namespace="kube-system" Pod="coredns-76f75df574-skvd5" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.161 [INFO][5042] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Namespace="kube-system" Pod="coredns-76f75df574-skvd5" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.415 [INFO][5076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" HandleID="k8s-pod-network.de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.446 [INFO][5076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" HandleID="k8s-pod-network.de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000301a60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-9", "pod":"coredns-76f75df574-skvd5", "timestamp":"2025-01-17 12:19:10.415610267 +0000 UTC"}, Hostname:"ip-172-31-23-9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.446 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.634 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.636 [INFO][5076] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-9' Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.666 [INFO][5076] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" host="ip-172-31-23-9" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.722 [INFO][5076] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-9" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.773 [INFO][5076] ipam/ipam.go 489: Trying affinity for 192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.787 [INFO][5076] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.817 [INFO][5076] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.820 [INFO][5076] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" host="ip-172-31-23-9" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.838 [INFO][5076] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.869 [INFO][5076] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" host="ip-172-31-23-9" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.902 [INFO][5076] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.196/26] block=192.168.79.192/26 handle="k8s-pod-network.de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" host="ip-172-31-23-9" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.904 [INFO][5076] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.196/26] handle="k8s-pod-network.de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" host="ip-172-31-23-9" Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.904 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:19:11.074619 containerd[2097]: 2025-01-17 12:19:10.904 [INFO][5076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.196/26] IPv6=[] ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" HandleID="k8s-pod-network.de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:11.076799 containerd[2097]: 2025-01-17 12:19:10.952 [INFO][5042] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Namespace="kube-system" Pod="coredns-76f75df574-skvd5" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6b19cc1c-714d-4ba2-a8d1-0de091969729", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"", Pod:"coredns-76f75df574-skvd5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb1c9f67b64", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:11.076799 containerd[2097]: 2025-01-17 12:19:10.953 [INFO][5042] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.196/32] ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Namespace="kube-system" Pod="coredns-76f75df574-skvd5" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:11.076799 containerd[2097]: 2025-01-17 12:19:10.953 [INFO][5042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb1c9f67b64 ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Namespace="kube-system" Pod="coredns-76f75df574-skvd5" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:11.076799 containerd[2097]: 2025-01-17 12:19:11.019 [INFO][5042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Namespace="kube-system" Pod="coredns-76f75df574-skvd5" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 
12:19:11.076799 containerd[2097]: 2025-01-17 12:19:11.023 [INFO][5042] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Namespace="kube-system" Pod="coredns-76f75df574-skvd5" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6b19cc1c-714d-4ba2-a8d1-0de091969729", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b", Pod:"coredns-76f75df574-skvd5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb1c9f67b64", MAC:"4a:99:f3:7b:80:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:11.076799 containerd[2097]: 2025-01-17 12:19:11.048 [INFO][5042] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b" Namespace="kube-system" Pod="coredns-76f75df574-skvd5" WorkloadEndpoint="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:11.113264 containerd[2097]: time="2025-01-17T12:19:11.113200389Z" level=info msg="StopPodSandbox for \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\"" Jan 17 12:19:11.137154 containerd[2097]: time="2025-01-17T12:19:11.136494112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-42cdl,Uid:149e3208-6b3e-4a46-b0fa-9024d88c37c0,Namespace:kube-system,Attempt:1,} returns sandbox id \"3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde\"" Jan 17 12:19:11.180911 systemd-networkd[1647]: cali75caa82e358: Link UP Jan 17 12:19:11.181195 systemd-networkd[1647]: cali75caa82e358: Gained carrier Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:10.161 [INFO][5032] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0 calico-apiserver-8675f558fd- calico-apiserver 2f5958e1-c100-4633-9f03-22bc32367a23 809 0 2025-01-17 12:18:38 
+0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8675f558fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-9 calico-apiserver-8675f558fd-2mmb2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali75caa82e358 [] []}} ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-2mmb2" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:10.162 [INFO][5032] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-2mmb2" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:10.457 [INFO][5080] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" HandleID="k8s-pod-network.62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:10.485 [INFO][5080] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" HandleID="k8s-pod-network.62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-9", "pod":"calico-apiserver-8675f558fd-2mmb2", "timestamp":"2025-01-17 12:19:10.457551662 +0000 UTC"}, Hostname:"ip-172-31-23-9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:10.486 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:10.904 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:10.918 [INFO][5080] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-9' Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:10.943 [INFO][5080] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" host="ip-172-31-23-9" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.007 [INFO][5080] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-9" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.067 [INFO][5080] ipam/ipam.go 489: Trying affinity for 192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.081 [INFO][5080] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.099 [INFO][5080] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.099 [INFO][5080] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" host="ip-172-31-23-9" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.112 [INFO][5080] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67 Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.124 [INFO][5080] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" host="ip-172-31-23-9" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.149 [INFO][5080] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.197/26] block=192.168.79.192/26 handle="k8s-pod-network.62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" host="ip-172-31-23-9" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.150 [INFO][5080] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.197/26] handle="k8s-pod-network.62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" host="ip-172-31-23-9" Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.150 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:19:11.263305 containerd[2097]: 2025-01-17 12:19:11.150 [INFO][5080] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.197/26] IPv6=[] ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" HandleID="k8s-pod-network.62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:11.266704 containerd[2097]: 2025-01-17 12:19:11.176 [INFO][5032] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-2mmb2" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0", GenerateName:"calico-apiserver-8675f558fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5958e1-c100-4633-9f03-22bc32367a23", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8675f558fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"", Pod:"calico-apiserver-8675f558fd-2mmb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75caa82e358", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:11.266704 containerd[2097]: 2025-01-17 12:19:11.177 [INFO][5032] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.197/32] ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-2mmb2" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:11.266704 containerd[2097]: 2025-01-17 12:19:11.177 [INFO][5032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75caa82e358 ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-2mmb2" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:11.266704 containerd[2097]: 2025-01-17 12:19:11.180 [INFO][5032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-2mmb2" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:11.266704 containerd[2097]: 2025-01-17 12:19:11.180 [INFO][5032] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-2mmb2" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0", GenerateName:"calico-apiserver-8675f558fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5958e1-c100-4633-9f03-22bc32367a23", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8675f558fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67", Pod:"calico-apiserver-8675f558fd-2mmb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75caa82e358", MAC:"96:f9:08:28:98:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:11.266704 containerd[2097]: 2025-01-17 12:19:11.216 [INFO][5032] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-2mmb2" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:11.266704 containerd[2097]: time="2025-01-17T12:19:11.172428018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:11.266704 containerd[2097]: time="2025-01-17T12:19:11.172502561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:11.266704 containerd[2097]: time="2025-01-17T12:19:11.172541153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:11.266704 containerd[2097]: time="2025-01-17T12:19:11.172770552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:11.272439 containerd[2097]: time="2025-01-17T12:19:11.271225495Z" level=info msg="CreateContainer within sandbox \"3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:19:11.418340 systemd[1]: Started sshd@8-172.31.23.9:22-139.178.89.65:55308.service - OpenSSH per-connection server daemon (139.178.89.65:55308). 
Jan 17 12:19:11.448317 containerd[2097]: time="2025-01-17T12:19:11.448279483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79gdb,Uid:61c77eba-5156-4cb8-a574-8dbe4d400655,Namespace:calico-system,Attempt:1,} returns sandbox id \"9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371\"" Jan 17 12:19:11.450069 containerd[2097]: time="2025-01-17T12:19:11.450031531Z" level=info msg="CreateContainer within sandbox \"3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dbc0fb468c1637f9819acfb849414260f40bcc43200fb13c458b984ccd55082\"" Jan 17 12:19:11.451810 containerd[2097]: time="2025-01-17T12:19:11.451780589Z" level=info msg="StartContainer for \"3dbc0fb468c1637f9819acfb849414260f40bcc43200fb13c458b984ccd55082\"" Jan 17 12:19:11.457636 containerd[2097]: time="2025-01-17T12:19:11.454478588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:19:11.703928 containerd[2097]: time="2025-01-17T12:19:11.701430778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:11.703928 containerd[2097]: time="2025-01-17T12:19:11.701517336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:11.703928 containerd[2097]: time="2025-01-17T12:19:11.701535712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:11.703928 containerd[2097]: time="2025-01-17T12:19:11.701732644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:11.703928 containerd[2097]: time="2025-01-17T12:19:11.702356164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:11.703928 containerd[2097]: time="2025-01-17T12:19:11.702420675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:11.703928 containerd[2097]: time="2025-01-17T12:19:11.702437489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:11.703928 containerd[2097]: time="2025-01-17T12:19:11.702541258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:11.762336 sshd[5295]: Accepted publickey for core from 139.178.89.65 port 55308 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:19:11.770584 sshd[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:11.837104 systemd-logind[2059]: New session 9 of user core. Jan 17 12:19:11.843657 systemd[1]: Started session-9.scope - Session 9 of User core. 
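The sshd line above logs the accepted key only by its OpenSSH-style fingerprint ("RSA SHA256:AjkUy..."). A small sketch for matching an on-disk key to that log line, assuming golang.org/x/crypto/ssh is available and using a hypothetical authorized_keys path for the core user (not taken from the log):

```go
// Print the same "SHA256:..." fingerprint form that sshd logs when it
// accepts a public key, so a local key can be matched against the log entry.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	raw, err := os.ReadFile("/home/core/.ssh/authorized_keys") // hypothetical path, adjust as needed
	if err != nil {
		log.Fatal(err)
	}
	key, comment, _, _, err := ssh.ParseAuthorizedKey(raw) // parses the first key in the file
	if err != nil {
		log.Fatal(err)
	}
	// FingerprintSHA256 yields the unpadded-base64 form sshd prints,
	// e.g. "SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ".
	fmt.Printf("%s %s\n", ssh.FingerprintSHA256(key), comment)
}
```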
Jan 17 12:19:11.857014 systemd-networkd[1647]: calia7e5cafe35f: Gained IPv6LL Jan 17 12:19:11.891436 containerd[2097]: time="2025-01-17T12:19:11.890006165Z" level=info msg="StartContainer for \"3dbc0fb468c1637f9819acfb849414260f40bcc43200fb13c458b984ccd55082\" returns successfully" Jan 17 12:19:11.943331 containerd[2097]: time="2025-01-17T12:19:11.942281147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9656dd96d-gsvkg,Uid:ebf0bd96-fd27-4fed-9e45-06b22eb36a4a,Namespace:calico-system,Attempt:1,} returns sandbox id \"20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6\"" Jan 17 12:19:11.984122 systemd-networkd[1647]: cali5f0f80c1e0f: Gained IPv6LL Jan 17 12:19:12.121789 containerd[2097]: time="2025-01-17T12:19:12.120929467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-skvd5,Uid:6b19cc1c-714d-4ba2-a8d1-0de091969729,Namespace:kube-system,Attempt:1,} returns sandbox id \"de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b\"" Jan 17 12:19:12.131913 containerd[2097]: time="2025-01-17T12:19:12.131803323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8675f558fd-2mmb2,Uid:2f5958e1-c100-4633-9f03-22bc32367a23,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67\"" Jan 17 12:19:12.152059 containerd[2097]: time="2025-01-17T12:19:12.150342960Z" level=info msg="CreateContainer within sandbox \"de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:11.875 [INFO][5258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:11.878 [INFO][5258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" iface="eth0" netns="/var/run/netns/cni-775901a6-152c-017a-4332-516477fd25de" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:11.881 [INFO][5258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" iface="eth0" netns="/var/run/netns/cni-775901a6-152c-017a-4332-516477fd25de" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:11.881 [INFO][5258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" iface="eth0" netns="/var/run/netns/cni-775901a6-152c-017a-4332-516477fd25de" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:11.888 [INFO][5258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:11.888 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:12.172 [INFO][5388] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" HandleID="k8s-pod-network.1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:12.174 [INFO][5388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:12.175 [INFO][5388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:12.203 [WARNING][5388] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" HandleID="k8s-pod-network.1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:12.205 [INFO][5388] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" HandleID="k8s-pod-network.1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:12.207 [INFO][5388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:12.214280 containerd[2097]: 2025-01-17 12:19:12.211 [INFO][5258] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:12.214280 containerd[2097]: time="2025-01-17T12:19:12.213957607Z" level=info msg="TearDown network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\" successfully" Jan 17 12:19:12.214280 containerd[2097]: time="2025-01-17T12:19:12.214003722Z" level=info msg="StopPodSandbox for \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\" returns successfully" Jan 17 12:19:12.217000 containerd[2097]: time="2025-01-17T12:19:12.216527570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8675f558fd-s9wh8,Uid:20bd129c-9dbc-47e8-a882-de12365029b7,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:19:12.266541 containerd[2097]: time="2025-01-17T12:19:12.266416220Z" level=info msg="CreateContainer within sandbox \"de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65e80598535b135929e6e102f8064a56cea5a1c6af400c2a51b54df76bf845b7\"" Jan 17 12:19:12.272045 containerd[2097]: time="2025-01-17T12:19:12.272005192Z" level=info msg="StartContainer for \"65e80598535b135929e6e102f8064a56cea5a1c6af400c2a51b54df76bf845b7\"" Jan 17 12:19:12.437536 systemd-networkd[1647]: cali7b72a81373e: Gained IPv6LL Jan 17 12:19:12.583104 containerd[2097]: time="2025-01-17T12:19:12.583053013Z" level=info msg="StartContainer for \"65e80598535b135929e6e102f8064a56cea5a1c6af400c2a51b54df76bf845b7\" returns successfully" Jan 17 12:19:12.673858 systemd[1]: run-netns-cni\x2d775901a6\x2d152c\x2d017a\x2d4332\x2d516477fd25de.mount: Deactivated successfully. Jan 17 12:19:12.688391 systemd-networkd[1647]: cali75caa82e358: Gained IPv6LL Jan 17 12:19:12.715579 systemd-networkd[1647]: cali8138e51da00: Link UP Jan 17 12:19:12.716580 sshd[5295]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:12.721920 systemd-networkd[1647]: cali8138e51da00: Gained carrier Jan 17 12:19:12.729409 systemd[1]: sshd@8-172.31.23.9:22-139.178.89.65:55308.service: Deactivated successfully. Jan 17 12:19:12.732397 systemd-logind[2059]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:19:12.742555 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:19:12.760377 systemd-logind[2059]: Removed session 9. 
Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.418 [INFO][5433] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0 calico-apiserver-8675f558fd- calico-apiserver 20bd129c-9dbc-47e8-a882-de12365029b7 850 0 2025-01-17 12:18:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8675f558fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-9 calico-apiserver-8675f558fd-s9wh8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8138e51da00 [] []}} ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-s9wh8" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.420 [INFO][5433] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-s9wh8" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.610 [INFO][5478] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" HandleID="k8s-pod-network.fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.632 [INFO][5478] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" HandleID="k8s-pod-network.fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ba70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-9", "pod":"calico-apiserver-8675f558fd-s9wh8", "timestamp":"2025-01-17 12:19:12.61013749 +0000 UTC"}, Hostname:"ip-172-31-23-9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.632 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.632 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.632 [INFO][5478] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-9' Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.638 [INFO][5478] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" host="ip-172-31-23-9" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.650 [INFO][5478] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-9" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.663 [INFO][5478] ipam/ipam.go 489: Trying affinity for 192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.667 [INFO][5478] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.672 [INFO][5478] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="ip-172-31-23-9" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.672 [INFO][5478] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" host="ip-172-31-23-9" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.675 [INFO][5478] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.691 [INFO][5478] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" host="ip-172-31-23-9" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.704 [INFO][5478] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.198/26] block=192.168.79.192/26 handle="k8s-pod-network.fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" host="ip-172-31-23-9" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.704 [INFO][5478] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.198/26] handle="k8s-pod-network.fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" host="ip-172-31-23-9" Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.704 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
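The Workload identifiers in these entries (for example "ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0") appear to be the node name, pod name, and interface joined as "<node>-k8s-<pod>-<iface>" with every "-" in the node and pod names doubled. The sketch below only reproduces that pattern as inferred from the log text; it is not taken from the Calico source and should be treated as an approximation:

```go
// Reconstruct the WorkloadEndpoint name format visible in the IPAM log lines.
package main

import (
	"fmt"
	"strings"
)

func wepName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return esc(node) + "-k8s-" + esc(pod) + "-" + iface
}

func main() {
	// Matches the Workload= value logged for the second apiserver pod:
	// ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0
	fmt.Println(wepName("ip-172-31-23-9", "calico-apiserver-8675f558fd-s9wh8", "eth0"))
}
```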
Jan 17 12:19:12.774924 containerd[2097]: 2025-01-17 12:19:12.705 [INFO][5478] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.198/26] IPv6=[] ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" HandleID="k8s-pod-network.fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.784200 containerd[2097]: 2025-01-17 12:19:12.708 [INFO][5433] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-s9wh8" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0", GenerateName:"calico-apiserver-8675f558fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"20bd129c-9dbc-47e8-a882-de12365029b7", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8675f558fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"", Pod:"calico-apiserver-8675f558fd-s9wh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8138e51da00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:12.784200 containerd[2097]: 2025-01-17 12:19:12.709 [INFO][5433] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.198/32] ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-s9wh8" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.784200 containerd[2097]: 2025-01-17 12:19:12.709 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8138e51da00 ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-s9wh8" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.784200 containerd[2097]: 2025-01-17 12:19:12.720 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-s9wh8" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.784200 containerd[2097]: 2025-01-17 12:19:12.721 [INFO][5433] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-s9wh8" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0", GenerateName:"calico-apiserver-8675f558fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"20bd129c-9dbc-47e8-a882-de12365029b7", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8675f558fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c", Pod:"calico-apiserver-8675f558fd-s9wh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8138e51da00", MAC:"56:68:5b:0f:fc:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:12.784200 containerd[2097]: 2025-01-17 12:19:12.756 [INFO][5433] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c" Namespace="calico-apiserver" Pod="calico-apiserver-8675f558fd-s9wh8" WorkloadEndpoint="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:12.834388 containerd[2097]: time="2025-01-17T12:19:12.834197195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:12.834388 containerd[2097]: time="2025-01-17T12:19:12.834256026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:12.834388 containerd[2097]: time="2025-01-17T12:19:12.834271863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:12.835061 containerd[2097]: time="2025-01-17T12:19:12.834377846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:12.926938 kubelet[3501]: I0117 12:19:12.926063 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-skvd5" podStartSLOduration=43.926007805 podStartE2EDuration="43.926007805s" podCreationTimestamp="2025-01-17 12:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:12.913248774 +0000 UTC m=+57.042740279" watchObservedRunningTime="2025-01-17 12:19:12.926007805 +0000 UTC m=+57.055499311" Jan 17 12:19:12.946047 systemd-networkd[1647]: calidb1c9f67b64: Gained IPv6LL Jan 17 12:19:13.044005 containerd[2097]: time="2025-01-17T12:19:13.043953594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8675f558fd-s9wh8,Uid:20bd129c-9dbc-47e8-a882-de12365029b7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c\"" Jan 17 12:19:13.802692 containerd[2097]: time="2025-01-17T12:19:13.802561117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:13.804210 containerd[2097]: time="2025-01-17T12:19:13.804053516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:19:13.809408 containerd[2097]: time="2025-01-17T12:19:13.807077453Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:13.813730 containerd[2097]: time="2025-01-17T12:19:13.813651905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:13.815704 containerd[2097]: time="2025-01-17T12:19:13.815391101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.357865465s" Jan 17 12:19:13.815704 containerd[2097]: time="2025-01-17T12:19:13.815465935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:19:13.817024 containerd[2097]: time="2025-01-17T12:19:13.816980256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:19:13.827034 containerd[2097]: time="2025-01-17T12:19:13.826990489Z" level=info msg="CreateContainer within sandbox \"9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:19:13.856364 containerd[2097]: time="2025-01-17T12:19:13.856318303Z" level=info msg="CreateContainer within sandbox \"9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5315a8cd7911d3c8972a25039863487bce3a7149c556d5bc8606a09900d0eda3\"" Jan 17 12:19:13.859421 containerd[2097]: time="2025-01-17T12:19:13.858719732Z" level=info msg="StartContainer for 
\"5315a8cd7911d3c8972a25039863487bce3a7149c556d5bc8606a09900d0eda3\"" Jan 17 12:19:13.926956 systemd[1]: run-containerd-runc-k8s.io-5315a8cd7911d3c8972a25039863487bce3a7149c556d5bc8606a09900d0eda3-runc.iRsp9W.mount: Deactivated successfully. Jan 17 12:19:13.985504 containerd[2097]: time="2025-01-17T12:19:13.985461565Z" level=info msg="StartContainer for \"5315a8cd7911d3c8972a25039863487bce3a7149c556d5bc8606a09900d0eda3\" returns successfully" Jan 17 12:19:14.417155 systemd-networkd[1647]: cali8138e51da00: Gained IPv6LL Jan 17 12:19:16.141199 containerd[2097]: time="2025-01-17T12:19:16.141162203Z" level=info msg="StopPodSandbox for \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\"" Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.277 [WARNING][5618] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0", GenerateName:"calico-kube-controllers-9656dd96d-", Namespace:"calico-system", SelfLink:"", UID:"ebf0bd96-fd27-4fed-9e45-06b22eb36a4a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9656dd96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6", Pod:"calico-kube-controllers-9656dd96d-gsvkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f0f80c1e0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.277 [INFO][5618] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.277 [INFO][5618] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" iface="eth0" netns="" Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.277 [INFO][5618] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.277 [INFO][5618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.330 [INFO][5625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" HandleID="k8s-pod-network.7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.330 [INFO][5625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.330 [INFO][5625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.344 [WARNING][5625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" HandleID="k8s-pod-network.7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.345 [INFO][5625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" HandleID="k8s-pod-network.7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.348 [INFO][5625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:16.356667 containerd[2097]: 2025-01-17 12:19:16.352 [INFO][5618] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:16.356667 containerd[2097]: time="2025-01-17T12:19:16.356330608Z" level=info msg="TearDown network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\" successfully" Jan 17 12:19:16.356667 containerd[2097]: time="2025-01-17T12:19:16.356375739Z" level=info msg="StopPodSandbox for \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\" returns successfully" Jan 17 12:19:16.359362 containerd[2097]: time="2025-01-17T12:19:16.357444550Z" level=info msg="RemovePodSandbox for \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\"" Jan 17 12:19:16.359362 containerd[2097]: time="2025-01-17T12:19:16.357483426Z" level=info msg="Forcibly stopping sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\"" Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.463 [WARNING][5644] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0", GenerateName:"calico-kube-controllers-9656dd96d-", Namespace:"calico-system", SelfLink:"", UID:"ebf0bd96-fd27-4fed-9e45-06b22eb36a4a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9656dd96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6", Pod:"calico-kube-controllers-9656dd96d-gsvkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f0f80c1e0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.463 [INFO][5644] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.463 [INFO][5644] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" iface="eth0" netns="" Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.463 [INFO][5644] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.463 [INFO][5644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.568 [INFO][5651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" HandleID="k8s-pod-network.7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.569 [INFO][5651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.569 [INFO][5651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.583 [WARNING][5651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" HandleID="k8s-pod-network.7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.583 [INFO][5651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" HandleID="k8s-pod-network.7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Workload="ip--172--31--23--9-k8s-calico--kube--controllers--9656dd96d--gsvkg-eth0" Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.586 [INFO][5651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:16.594899 containerd[2097]: 2025-01-17 12:19:16.590 [INFO][5644] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367" Jan 17 12:19:16.596347 containerd[2097]: time="2025-01-17T12:19:16.595957145Z" level=info msg="TearDown network for sandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\" successfully" Jan 17 12:19:16.601160 containerd[2097]: time="2025-01-17T12:19:16.601110206Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:19:16.601305 containerd[2097]: time="2025-01-17T12:19:16.601187310Z" level=info msg="RemovePodSandbox \"7c64c09cb6e4389eb151acd9b1c9fdaa555394372f4f577a43176f4b1c3ee367\" returns successfully" Jan 17 12:19:16.602902 containerd[2097]: time="2025-01-17T12:19:16.602870982Z" level=info msg="StopPodSandbox for \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\"" Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.718 [WARNING][5669] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"61c77eba-5156-4cb8-a574-8dbe4d400655", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371", Pod:"csi-node-driver-79gdb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7e5cafe35f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.719 [INFO][5669] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.719 [INFO][5669] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" iface="eth0" netns="" Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.719 [INFO][5669] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.719 [INFO][5669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.773 [INFO][5676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" HandleID="k8s-pod-network.57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.774 [INFO][5676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.774 [INFO][5676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.786 [WARNING][5676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" HandleID="k8s-pod-network.57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.786 [INFO][5676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" HandleID="k8s-pod-network.57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.789 [INFO][5676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:16.796212 containerd[2097]: 2025-01-17 12:19:16.792 [INFO][5669] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:16.796212 containerd[2097]: time="2025-01-17T12:19:16.796064429Z" level=info msg="TearDown network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\" successfully" Jan 17 12:19:16.796212 containerd[2097]: time="2025-01-17T12:19:16.796094366Z" level=info msg="StopPodSandbox for \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\" returns successfully" Jan 17 12:19:16.797800 containerd[2097]: time="2025-01-17T12:19:16.797063552Z" level=info msg="RemovePodSandbox for \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\"" Jan 17 12:19:16.797800 containerd[2097]: time="2025-01-17T12:19:16.797097094Z" level=info msg="Forcibly stopping sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\"" Jan 17 12:19:16.888549 ntpd[2043]: Listen normally on 6 vxlan.calico 192.168.79.192:123 Jan 17 12:19:16.890793 ntpd[2043]: 17 Jan 12:19:16 ntpd[2043]: Listen normally on 6 vxlan.calico 192.168.79.192:123 Jan 17 12:19:16.890793 ntpd[2043]: 17 Jan 12:19:16 ntpd[2043]: Listen normally on 7 vxlan.calico [fe80::64a4:7cff:fee5:3883%4]:123 Jan 17 12:19:16.890793 ntpd[2043]: 17 Jan 12:19:16 ntpd[2043]: Listen normally on 8 cali7b72a81373e [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:19:16.890793 ntpd[2043]: 17 Jan 12:19:16 ntpd[2043]: Listen normally on 9 calia7e5cafe35f [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:19:16.890793 ntpd[2043]: 17 Jan 12:19:16 ntpd[2043]: Listen normally on 10 cali5f0f80c1e0f [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:19:16.890793 ntpd[2043]: 17 Jan 12:19:16 ntpd[2043]: Listen normally on 11 calidb1c9f67b64 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:19:16.890793 ntpd[2043]: 17 Jan 12:19:16 ntpd[2043]: Listen normally on 12 cali75caa82e358 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:19:16.890793 ntpd[2043]: 17 Jan 12:19:16 ntpd[2043]: Listen normally on 13 cali8138e51da00 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:19:16.888637 ntpd[2043]: Listen normally on 7 vxlan.calico [fe80::64a4:7cff:fee5:3883%4]:123 Jan 17 12:19:16.888697 ntpd[2043]: Listen normally on 8 cali7b72a81373e [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:19:16.888738 ntpd[2043]: Listen normally on 9 calia7e5cafe35f [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:19:16.888774 ntpd[2043]: Listen normally on 10 cali5f0f80c1e0f [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:19:16.888813 ntpd[2043]: Listen normally on 11 calidb1c9f67b64 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:19:16.888878 ntpd[2043]: Listen normally on 12 cali75caa82e358 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 
12:19:16.888922 ntpd[2043]: Listen normally on 13 cali8138e51da00 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.876 [WARNING][5696] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"61c77eba-5156-4cb8-a574-8dbe4d400655", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371", Pod:"csi-node-driver-79gdb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7e5cafe35f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.877 [INFO][5696] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.877 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" iface="eth0" netns="" Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.877 [INFO][5696] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.877 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.951 [INFO][5703] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" HandleID="k8s-pod-network.57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.952 [INFO][5703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.952 [INFO][5703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
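The ntpd entries above show it binding port 123 on the link-local IPv6 address of each Calico veth (cali7b72a81373e through cali8138e51da00) once those interfaces gain IPv6LL. A minimal sketch (standard library only; it would have to run on the node itself, which is an assumption) listing the same "cali*" interfaces and fe80:: addresses:

```go
// Enumerate cali* veth interfaces and their IPv6 link-local addresses,
// i.e. the fe80::...%N endpoints ntpd reports listening on above.
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "cali") {
			continue
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.IsLinkLocalUnicast() {
				// e.g. cali8138e51da00 fe80::ecee:eeff:feee:eeee%12
				fmt.Printf("%s %s%%%d\n", ifc.Name, ipnet.IP, ifc.Index)
			}
		}
	}
}
```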
Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.962 [WARNING][5703] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" HandleID="k8s-pod-network.57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.962 [INFO][5703] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" HandleID="k8s-pod-network.57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Workload="ip--172--31--23--9-k8s-csi--node--driver--79gdb-eth0" Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.966 [INFO][5703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:16.975546 containerd[2097]: 2025-01-17 12:19:16.971 [INFO][5696] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41" Jan 17 12:19:16.975546 containerd[2097]: time="2025-01-17T12:19:16.975307961Z" level=info msg="TearDown network for sandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\" successfully" Jan 17 12:19:16.980823 containerd[2097]: time="2025-01-17T12:19:16.980758247Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:19:16.980969 containerd[2097]: time="2025-01-17T12:19:16.980887626Z" level=info msg="RemovePodSandbox \"57ec6f561d3e87dbdafbe4f5fba15b66d4d571b45d49e14e00a30149e1a84f41\" returns successfully" Jan 17 12:19:16.982151 containerd[2097]: time="2025-01-17T12:19:16.982118665Z" level=info msg="StopPodSandbox for \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\"" Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.103 [WARNING][5721] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"149e3208-6b3e-4a46-b0fa-9024d88c37c0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde", Pod:"coredns-76f75df574-42cdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b72a81373e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.105 [INFO][5721] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.105 [INFO][5721] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" iface="eth0" netns="" Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.106 [INFO][5721] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.106 [INFO][5721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.225 [INFO][5727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" HandleID="k8s-pod-network.e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.225 [INFO][5727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.225 [INFO][5727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.246 [WARNING][5727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" HandleID="k8s-pod-network.e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.246 [INFO][5727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" HandleID="k8s-pod-network.e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.249 [INFO][5727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:17.258002 containerd[2097]: 2025-01-17 12:19:17.255 [INFO][5721] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:17.259268 containerd[2097]: time="2025-01-17T12:19:17.259228605Z" level=info msg="TearDown network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\" successfully" Jan 17 12:19:17.259364 containerd[2097]: time="2025-01-17T12:19:17.259349371Z" level=info msg="StopPodSandbox for \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\" returns successfully" Jan 17 12:19:17.260477 containerd[2097]: time="2025-01-17T12:19:17.260101658Z" level=info msg="RemovePodSandbox for \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\"" Jan 17 12:19:17.260477 containerd[2097]: time="2025-01-17T12:19:17.260140451Z" level=info msg="Forcibly stopping sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\"" Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.333 [WARNING][5745] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"149e3208-6b3e-4a46-b0fa-9024d88c37c0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"3ed54d3c60346c931b29decc8ca89de2ecf4dc7550bee49e0b44971acb560dde", Pod:"coredns-76f75df574-42cdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b72a81373e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.333 [INFO][5745] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.333 [INFO][5745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" iface="eth0" netns="" Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.333 [INFO][5745] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.333 [INFO][5745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.381 [INFO][5751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" HandleID="k8s-pod-network.e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.381 [INFO][5751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.382 [INFO][5751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.393 [WARNING][5751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" HandleID="k8s-pod-network.e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.393 [INFO][5751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" HandleID="k8s-pod-network.e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--42cdl-eth0" Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.396 [INFO][5751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:17.399660 containerd[2097]: 2025-01-17 12:19:17.398 [INFO][5745] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14" Jan 17 12:19:17.400717 containerd[2097]: time="2025-01-17T12:19:17.399706411Z" level=info msg="TearDown network for sandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\" successfully" Jan 17 12:19:17.406689 containerd[2097]: time="2025-01-17T12:19:17.406638675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:17.409068 containerd[2097]: time="2025-01-17T12:19:17.408884876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:19:17.409068 containerd[2097]: time="2025-01-17T12:19:17.408959695Z" level=info msg="RemovePodSandbox \"e57e6eba0a55893cb943ae47c37d04bf1c161f21147d37c2de48ec2105fbcd14\" returns successfully" Jan 17 12:19:17.409456 containerd[2097]: time="2025-01-17T12:19:17.409425663Z" level=info msg="StopPodSandbox for \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\"" Jan 17 12:19:17.413199 containerd[2097]: time="2025-01-17T12:19:17.413152309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:19:17.413305 containerd[2097]: time="2025-01-17T12:19:17.413261072Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:17.417709 containerd[2097]: time="2025-01-17T12:19:17.417664800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:17.418086 containerd[2097]: time="2025-01-17T12:19:17.418053726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.600943514s" Jan 17 12:19:17.418172 containerd[2097]: time="2025-01-17T12:19:17.418086510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:19:17.420643 containerd[2097]: time="2025-01-17T12:19:17.419642376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:19:17.460216 containerd[2097]: time="2025-01-17T12:19:17.460165243Z" level=info msg="CreateContainer within sandbox \"20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:19:17.500045 containerd[2097]: time="2025-01-17T12:19:17.499887884Z" level=info msg="CreateContainer within sandbox \"20d9c92e6243d7577258eb6453406413db1633c92e14bd5f1d185dd90b1c92e6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7e28a4d15f9eca5ecc48b78d99b068e5a8aaf7b32e01750f9b64d2254f76856b\"" Jan 17 12:19:17.501183 containerd[2097]: time="2025-01-17T12:19:17.500818550Z" level=info msg="StartContainer for \"7e28a4d15f9eca5ecc48b78d99b068e5a8aaf7b32e01750f9b64d2254f76856b\"" Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.537 [WARNING][5772] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6b19cc1c-714d-4ba2-a8d1-0de091969729", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b", Pod:"coredns-76f75df574-skvd5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb1c9f67b64", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.537 [INFO][5772] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.538 [INFO][5772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" iface="eth0" netns="" Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.538 [INFO][5772] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.538 [INFO][5772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.590 [INFO][5792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" HandleID="k8s-pod-network.9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.590 [INFO][5792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.590 [INFO][5792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.600 [WARNING][5792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" HandleID="k8s-pod-network.9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.600 [INFO][5792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" HandleID="k8s-pod-network.9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.603 [INFO][5792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:17.610477 containerd[2097]: 2025-01-17 12:19:17.606 [INFO][5772] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:17.611072 containerd[2097]: time="2025-01-17T12:19:17.610536602Z" level=info msg="TearDown network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\" successfully" Jan 17 12:19:17.611072 containerd[2097]: time="2025-01-17T12:19:17.610563771Z" level=info msg="StopPodSandbox for \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\" returns successfully" Jan 17 12:19:17.612964 containerd[2097]: time="2025-01-17T12:19:17.611341364Z" level=info msg="RemovePodSandbox for \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\"" Jan 17 12:19:17.612964 containerd[2097]: time="2025-01-17T12:19:17.611917604Z" level=info msg="Forcibly stopping sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\"" Jan 17 12:19:17.643792 containerd[2097]: time="2025-01-17T12:19:17.643748887Z" level=info msg="StartContainer for \"7e28a4d15f9eca5ecc48b78d99b068e5a8aaf7b32e01750f9b64d2254f76856b\" returns successfully" Jan 17 12:19:17.752699 systemd[1]: Started sshd@9-172.31.23.9:22-139.178.89.65:55318.service - OpenSSH per-connection server daemon (139.178.89.65:55318). Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.683 [WARNING][5829] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6b19cc1c-714d-4ba2-a8d1-0de091969729", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"de2645f852bb499f3d268ebbca4ac9125033e3054950ad3ff6e17eaa4cbe118b", Pod:"coredns-76f75df574-skvd5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb1c9f67b64", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.684 [INFO][5829] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.684 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" iface="eth0" netns="" Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.684 [INFO][5829] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.684 [INFO][5829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.731 [INFO][5839] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" HandleID="k8s-pod-network.9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.731 [INFO][5839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.731 [INFO][5839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.750 [WARNING][5839] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" HandleID="k8s-pod-network.9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.750 [INFO][5839] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" HandleID="k8s-pod-network.9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Workload="ip--172--31--23--9-k8s-coredns--76f75df574--skvd5-eth0" Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.755 [INFO][5839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:17.764518 containerd[2097]: 2025-01-17 12:19:17.761 [INFO][5829] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296" Jan 17 12:19:17.764518 containerd[2097]: time="2025-01-17T12:19:17.764437953Z" level=info msg="TearDown network for sandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\" successfully" Jan 17 12:19:17.775338 containerd[2097]: time="2025-01-17T12:19:17.774959992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:19:17.775338 containerd[2097]: time="2025-01-17T12:19:17.775037829Z" level=info msg="RemovePodSandbox \"9af6522a56f7922b2036718b9f305651161cc28c3379d7b07a4bfbd8c8e91296\" returns successfully" Jan 17 12:19:17.776127 containerd[2097]: time="2025-01-17T12:19:17.776087057Z" level=info msg="StopPodSandbox for \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\"" Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.868 [WARNING][5864] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0", GenerateName:"calico-apiserver-8675f558fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5958e1-c100-4633-9f03-22bc32367a23", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8675f558fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67", Pod:"calico-apiserver-8675f558fd-2mmb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75caa82e358", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.869 [INFO][5864] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.869 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" iface="eth0" netns="" Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.869 [INFO][5864] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.869 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.903 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" HandleID="k8s-pod-network.d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.904 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.904 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.911 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" HandleID="k8s-pod-network.d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.911 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" HandleID="k8s-pod-network.d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.913 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:17.917871 containerd[2097]: 2025-01-17 12:19:17.914 [INFO][5864] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:17.917871 containerd[2097]: time="2025-01-17T12:19:17.916575735Z" level=info msg="TearDown network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\" successfully" Jan 17 12:19:17.917871 containerd[2097]: time="2025-01-17T12:19:17.916605372Z" level=info msg="StopPodSandbox for \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\" returns successfully" Jan 17 12:19:17.919426 containerd[2097]: time="2025-01-17T12:19:17.919392792Z" level=info msg="RemovePodSandbox for \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\"" Jan 17 12:19:17.919533 containerd[2097]: time="2025-01-17T12:19:17.919448662Z" level=info msg="Forcibly stopping sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\"" Jan 17 12:19:17.977245 sshd[5849]: Accepted publickey for core from 139.178.89.65 port 55318 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:19:17.981411 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:17.994150 systemd-logind[2059]: New session 10 of user core. Jan 17 12:19:18.004723 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 17 12:19:18.025099 kubelet[3501]: I0117 12:19:18.025067 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-42cdl" podStartSLOduration=49.025009192 podStartE2EDuration="49.025009192s" podCreationTimestamp="2025-01-17 12:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:12.953355449 +0000 UTC m=+57.082846969" watchObservedRunningTime="2025-01-17 12:19:18.025009192 +0000 UTC m=+62.154500708" Jan 17 12:19:18.027058 kubelet[3501]: I0117 12:19:18.026813 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9656dd96d-gsvkg" podStartSLOduration=33.562031985 podStartE2EDuration="39.026754105s" podCreationTimestamp="2025-01-17 12:18:39 +0000 UTC" firstStartedPulling="2025-01-17 12:19:11.954031396 +0000 UTC m=+56.083522891" lastFinishedPulling="2025-01-17 12:19:17.418753522 +0000 UTC m=+61.548245011" observedRunningTime="2025-01-17 12:19:18.018624553 +0000 UTC m=+62.148116087" watchObservedRunningTime="2025-01-17 12:19:18.026754105 +0000 UTC m=+62.156245588" Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:17.970 [WARNING][5890] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0", GenerateName:"calico-apiserver-8675f558fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5958e1-c100-4633-9f03-22bc32367a23", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8675f558fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67", Pod:"calico-apiserver-8675f558fd-2mmb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75caa82e358", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:17.970 [INFO][5890] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:17.970 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" iface="eth0" netns="" Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:17.970 [INFO][5890] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:17.970 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:18.041 [INFO][5896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" HandleID="k8s-pod-network.d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:18.041 [INFO][5896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:18.042 [INFO][5896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:18.052 [WARNING][5896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" HandleID="k8s-pod-network.d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:18.053 [INFO][5896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" HandleID="k8s-pod-network.d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--2mmb2-eth0" Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:18.056 [INFO][5896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:18.061389 containerd[2097]: 2025-01-17 12:19:18.058 [INFO][5890] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9" Jan 17 12:19:18.062398 containerd[2097]: time="2025-01-17T12:19:18.061429493Z" level=info msg="TearDown network for sandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\" successfully" Jan 17 12:19:18.067495 containerd[2097]: time="2025-01-17T12:19:18.067352260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:19:18.068711 containerd[2097]: time="2025-01-17T12:19:18.067667016Z" level=info msg="RemovePodSandbox \"d44ce178cfd01afd6e4002a740a65429ca16990dd2781670208dc4c19ad0c7f9\" returns successfully" Jan 17 12:19:18.068711 containerd[2097]: time="2025-01-17T12:19:18.068232245Z" level=info msg="StopPodSandbox for \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\"" Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.158 [WARNING][5932] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0", GenerateName:"calico-apiserver-8675f558fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"20bd129c-9dbc-47e8-a882-de12365029b7", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8675f558fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c", Pod:"calico-apiserver-8675f558fd-s9wh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8138e51da00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.158 [INFO][5932] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.158 [INFO][5932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" iface="eth0" netns="" Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.158 [INFO][5932] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.158 [INFO][5932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.188 [INFO][5945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" HandleID="k8s-pod-network.1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.189 [INFO][5945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.189 [INFO][5945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.196 [WARNING][5945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" HandleID="k8s-pod-network.1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.196 [INFO][5945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" HandleID="k8s-pod-network.1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.199 [INFO][5945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:18.203155 containerd[2097]: 2025-01-17 12:19:18.201 [INFO][5932] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:18.203155 containerd[2097]: time="2025-01-17T12:19:18.203110021Z" level=info msg="TearDown network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\" successfully" Jan 17 12:19:18.203155 containerd[2097]: time="2025-01-17T12:19:18.203140110Z" level=info msg="StopPodSandbox for \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\" returns successfully" Jan 17 12:19:18.205009 containerd[2097]: time="2025-01-17T12:19:18.203656168Z" level=info msg="RemovePodSandbox for \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\"" Jan 17 12:19:18.205009 containerd[2097]: time="2025-01-17T12:19:18.203688889Z" level=info msg="Forcibly stopping sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\"" Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.259 [WARNING][5964] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0", GenerateName:"calico-apiserver-8675f558fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"20bd129c-9dbc-47e8-a882-de12365029b7", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8675f558fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-9", ContainerID:"fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c", Pod:"calico-apiserver-8675f558fd-s9wh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8138e51da00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.260 [INFO][5964] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.260 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" iface="eth0" netns="" Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.260 [INFO][5964] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.260 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.299 [INFO][5973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" HandleID="k8s-pod-network.1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.299 [INFO][5973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.300 [INFO][5973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.310 [WARNING][5973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" HandleID="k8s-pod-network.1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.310 [INFO][5973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" HandleID="k8s-pod-network.1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Workload="ip--172--31--23--9-k8s-calico--apiserver--8675f558fd--s9wh8-eth0" Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.313 [INFO][5973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:18.318595 containerd[2097]: 2025-01-17 12:19:18.315 [INFO][5964] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08" Jan 17 12:19:18.321783 containerd[2097]: time="2025-01-17T12:19:18.318996921Z" level=info msg="TearDown network for sandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\" successfully" Jan 17 12:19:18.326468 containerd[2097]: time="2025-01-17T12:19:18.326365631Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:19:18.326581 containerd[2097]: time="2025-01-17T12:19:18.326474190Z" level=info msg="RemovePodSandbox \"1bcd4d17ba49b8bec6372cc2a3d77d50880d7ef79dc013f89186c5632983fc08\" returns successfully" Jan 17 12:19:18.453375 sshd[5849]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:18.461343 systemd-logind[2059]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:19:18.461573 systemd[1]: sshd@9-172.31.23.9:22-139.178.89.65:55318.service: Deactivated successfully. Jan 17 12:19:18.467294 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:19:18.468485 systemd-logind[2059]: Removed session 10. Jan 17 12:19:18.485266 systemd[1]: Started sshd@10-172.31.23.9:22-139.178.89.65:55334.service - OpenSSH per-connection server daemon (139.178.89.65:55334). Jan 17 12:19:18.640455 sshd[5983]: Accepted publickey for core from 139.178.89.65 port 55334 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:19:18.642415 sshd[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:18.648674 systemd-logind[2059]: New session 11 of user core. Jan 17 12:19:18.657435 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:19:19.272784 sshd[5983]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:19.296095 systemd[1]: Started sshd@11-172.31.23.9:22-139.178.89.65:55350.service - OpenSSH per-connection server daemon (139.178.89.65:55350). Jan 17 12:19:19.299668 systemd[1]: sshd@10-172.31.23.9:22-139.178.89.65:55334.service: Deactivated successfully. Jan 17 12:19:19.325863 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:19:19.331904 systemd-logind[2059]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:19:19.339706 systemd-logind[2059]: Removed session 11. 
Jan 17 12:19:19.525387 sshd[5992]: Accepted publickey for core from 139.178.89.65 port 55350 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:19:19.528716 sshd[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:19.540215 systemd-logind[2059]: New session 12 of user core. Jan 17 12:19:19.547369 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:19:20.025340 sshd[5992]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:20.034588 systemd[1]: sshd@11-172.31.23.9:22-139.178.89.65:55350.service: Deactivated successfully. Jan 17 12:19:20.044498 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:19:20.044670 systemd-logind[2059]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:19:20.048405 systemd-logind[2059]: Removed session 12. Jan 17 12:19:20.112776 systemd-resolved[1972]: Under memory pressure, flushing caches. Jan 17 12:19:20.115214 systemd-journald[1570]: Under memory pressure, flushing caches. Jan 17 12:19:20.112910 systemd-resolved[1972]: Flushed all caches. Jan 17 12:19:20.666667 containerd[2097]: time="2025-01-17T12:19:20.666616266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:20.669173 containerd[2097]: time="2025-01-17T12:19:20.669109489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:19:20.671303 containerd[2097]: time="2025-01-17T12:19:20.671241675Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:20.676849 containerd[2097]: time="2025-01-17T12:19:20.674848071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:20.677201 containerd[2097]: time="2025-01-17T12:19:20.677126518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.257450175s" Jan 17 12:19:20.677924 containerd[2097]: time="2025-01-17T12:19:20.677896139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:19:20.679439 containerd[2097]: time="2025-01-17T12:19:20.679412992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:19:20.683808 containerd[2097]: time="2025-01-17T12:19:20.683763409Z" level=info msg="CreateContainer within sandbox \"62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:19:20.715001 containerd[2097]: time="2025-01-17T12:19:20.714940149Z" level=info msg="CreateContainer within sandbox \"62151dab7a2805ebe45d57d3df5bc79c7c1a3f7e700a039e9998b0e2e663cf67\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2fe9c41e0a078f06bda115a3a553b507f8134e6979e1bd19adc4a4d98890b614\"" Jan 17 
12:19:20.719593 containerd[2097]: time="2025-01-17T12:19:20.719126944Z" level=info msg="StartContainer for \"2fe9c41e0a078f06bda115a3a553b507f8134e6979e1bd19adc4a4d98890b614\"" Jan 17 12:19:20.830239 containerd[2097]: time="2025-01-17T12:19:20.830190746Z" level=info msg="StartContainer for \"2fe9c41e0a078f06bda115a3a553b507f8134e6979e1bd19adc4a4d98890b614\" returns successfully" Jan 17 12:19:21.045764 containerd[2097]: time="2025-01-17T12:19:21.045716612Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:21.048795 containerd[2097]: time="2025-01-17T12:19:21.048739995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:19:21.056132 containerd[2097]: time="2025-01-17T12:19:21.056075089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 376.618422ms" Jan 17 12:19:21.069063 containerd[2097]: time="2025-01-17T12:19:21.068634309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:19:21.079620 containerd[2097]: time="2025-01-17T12:19:21.078282169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:19:21.106893 containerd[2097]: time="2025-01-17T12:19:21.106724545Z" level=info msg="CreateContainer within sandbox \"fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:19:21.128433 kubelet[3501]: I0117 12:19:21.128221 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8675f558fd-2mmb2" podStartSLOduration=34.592875625 podStartE2EDuration="43.123365125s" podCreationTimestamp="2025-01-17 12:18:38 +0000 UTC" firstStartedPulling="2025-01-17 12:19:12.148538939 +0000 UTC m=+56.278030432" lastFinishedPulling="2025-01-17 12:19:20.679028443 +0000 UTC m=+64.808519932" observedRunningTime="2025-01-17 12:19:21.122359545 +0000 UTC m=+65.251851051" watchObservedRunningTime="2025-01-17 12:19:21.123365125 +0000 UTC m=+65.252856627" Jan 17 12:19:21.151088 containerd[2097]: time="2025-01-17T12:19:21.150491731Z" level=info msg="CreateContainer within sandbox \"fb2d8ba058380a155c17ae9acd5c80fecd0022f48524cb0d3424955a4c16779c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"231784ea451395a4d8e33c27df0d658a45648ea3574575f70b2bd78555efb759\"" Jan 17 12:19:21.152657 containerd[2097]: time="2025-01-17T12:19:21.152235518Z" level=info msg="StartContainer for \"231784ea451395a4d8e33c27df0d658a45648ea3574575f70b2bd78555efb759\"" Jan 17 12:19:21.163710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399261655.mount: Deactivated successfully. 
Jan 17 12:19:21.336139 containerd[2097]: time="2025-01-17T12:19:21.335266611Z" level=info msg="StartContainer for \"231784ea451395a4d8e33c27df0d658a45648ea3574575f70b2bd78555efb759\" returns successfully" Jan 17 12:19:22.141653 kubelet[3501]: I0117 12:19:22.141612 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8675f558fd-s9wh8" podStartSLOduration=36.111697413 podStartE2EDuration="44.141543218s" podCreationTimestamp="2025-01-17 12:18:38 +0000 UTC" firstStartedPulling="2025-01-17 12:19:13.046696983 +0000 UTC m=+57.176188467" lastFinishedPulling="2025-01-17 12:19:21.07654278 +0000 UTC m=+65.206034272" observedRunningTime="2025-01-17 12:19:22.138994171 +0000 UTC m=+66.268485687" watchObservedRunningTime="2025-01-17 12:19:22.141543218 +0000 UTC m=+66.271034723" Jan 17 12:19:22.163777 systemd-journald[1570]: Under memory pressure, flushing caches. Jan 17 12:19:22.162823 systemd-resolved[1972]: Under memory pressure, flushing caches. Jan 17 12:19:22.162878 systemd-resolved[1972]: Flushed all caches. Jan 17 12:19:22.848313 systemd[1]: run-containerd-runc-k8s.io-7e28a4d15f9eca5ecc48b78d99b068e5a8aaf7b32e01750f9b64d2254f76856b-runc.zmEa9I.mount: Deactivated successfully. Jan 17 12:19:23.108282 containerd[2097]: time="2025-01-17T12:19:23.108048911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:23.113550 containerd[2097]: time="2025-01-17T12:19:23.113499970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:19:23.117032 containerd[2097]: time="2025-01-17T12:19:23.116991574Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:23.124783 containerd[2097]: time="2025-01-17T12:19:23.124393160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:23.127745 containerd[2097]: time="2025-01-17T12:19:23.127577379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.049243212s" Jan 17 12:19:23.130764 containerd[2097]: time="2025-01-17T12:19:23.130551094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:19:23.136113 containerd[2097]: time="2025-01-17T12:19:23.136066724Z" level=info msg="CreateContainer within sandbox \"9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:19:23.199873 containerd[2097]: time="2025-01-17T12:19:23.199384850Z" level=info msg="CreateContainer within sandbox \"9a51ce056d13e3ab6ea61a2f6de274f6c10164074fb09e0813742a535e045371\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"f7889b407dbb295895de91ee3b493cc9fef15c49aeeae9a774fd8aec33a787be\"" Jan 17 12:19:23.202877 containerd[2097]: time="2025-01-17T12:19:23.201632186Z" level=info msg="StartContainer for \"f7889b407dbb295895de91ee3b493cc9fef15c49aeeae9a774fd8aec33a787be\"" Jan 17 12:19:23.419446 containerd[2097]: time="2025-01-17T12:19:23.419322479Z" level=info msg="StartContainer for \"f7889b407dbb295895de91ee3b493cc9fef15c49aeeae9a774fd8aec33a787be\" returns successfully" Jan 17 12:19:24.171046 kubelet[3501]: I0117 12:19:24.170993 3501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-79gdb" podStartSLOduration=33.490393644 podStartE2EDuration="45.170936009s" podCreationTimestamp="2025-01-17 12:18:39 +0000 UTC" firstStartedPulling="2025-01-17 12:19:11.450769081 +0000 UTC m=+55.580260571" lastFinishedPulling="2025-01-17 12:19:23.131311451 +0000 UTC m=+67.260802936" observedRunningTime="2025-01-17 12:19:24.161280781 +0000 UTC m=+68.290772309" watchObservedRunningTime="2025-01-17 12:19:24.170936009 +0000 UTC m=+68.300427512" Jan 17 12:19:24.520939 kubelet[3501]: I0117 12:19:24.520749 3501 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:19:24.523031 kubelet[3501]: I0117 12:19:24.522951 3501 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:19:25.055114 systemd[1]: Started sshd@12-172.31.23.9:22-139.178.89.65:32960.service - OpenSSH per-connection server daemon (139.178.89.65:32960). Jan 17 12:19:25.379246 sshd[6165]: Accepted publickey for core from 139.178.89.65 port 32960 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:19:25.382452 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:25.390705 systemd-logind[2059]: New session 13 of user core. Jan 17 12:19:25.398519 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:19:26.134133 systemd-resolved[1972]: Under memory pressure, flushing caches. Jan 17 12:19:26.139717 systemd-journald[1570]: Under memory pressure, flushing caches. Jan 17 12:19:26.134143 systemd-resolved[1972]: Flushed all caches. Jan 17 12:19:26.230621 sshd[6165]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:26.235606 systemd[1]: sshd@12-172.31.23.9:22-139.178.89.65:32960.service: Deactivated successfully. Jan 17 12:19:26.243065 systemd-logind[2059]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:19:26.243173 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:19:26.251691 systemd-logind[2059]: Removed session 13. Jan 17 12:19:28.177448 systemd-resolved[1972]: Under memory pressure, flushing caches. Jan 17 12:19:28.179049 systemd-journald[1570]: Under memory pressure, flushing caches. Jan 17 12:19:28.177474 systemd-resolved[1972]: Flushed all caches. Jan 17 12:19:31.260436 systemd[1]: Started sshd@13-172.31.23.9:22-139.178.89.65:42444.service - OpenSSH per-connection server daemon (139.178.89.65:42444). Jan 17 12:19:31.430688 sshd[6191]: Accepted publickey for core from 139.178.89.65 port 42444 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:19:31.431353 sshd[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:31.437267 systemd-logind[2059]: New session 14 of user core. 
Jan 17 12:19:31.445340 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 12:19:31.721721 sshd[6191]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:31.728808 systemd[1]: sshd@13-172.31.23.9:22-139.178.89.65:42444.service: Deactivated successfully.
Jan 17 12:19:31.737518 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 12:19:31.739027 systemd-logind[2059]: Session 14 logged out. Waiting for processes to exit.
Jan 17 12:19:31.740925 systemd-logind[2059]: Removed session 14.
Jan 17 12:19:36.129048 systemd[1]: run-containerd-runc-k8s.io-b9e931e8c71cfa31e6da26c1bba475ab4ae832fb4dcee6008c1f4fa0b39b604a-runc.Be7TWw.mount: Deactivated successfully.
Jan 17 12:19:36.749303 systemd[1]: Started sshd@14-172.31.23.9:22-139.178.89.65:42456.service - OpenSSH per-connection server daemon (139.178.89.65:42456).
Jan 17 12:19:36.968600 sshd[6234]: Accepted publickey for core from 139.178.89.65 port 42456 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:19:36.972117 sshd[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:36.982200 systemd-logind[2059]: New session 15 of user core.
Jan 17 12:19:36.986317 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 12:19:37.352181 sshd[6234]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:37.357340 systemd[1]: sshd@14-172.31.23.9:22-139.178.89.65:42456.service: Deactivated successfully.
Jan 17 12:19:37.389146 systemd-logind[2059]: Session 15 logged out. Waiting for processes to exit.
Jan 17 12:19:37.392039 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 12:19:37.403600 systemd[1]: Started sshd@15-172.31.23.9:22-139.178.89.65:42468.service - OpenSSH per-connection server daemon (139.178.89.65:42468).
Jan 17 12:19:37.404888 systemd-logind[2059]: Removed session 15.
Jan 17 12:19:37.563459 sshd[6248]: Accepted publickey for core from 139.178.89.65 port 42468 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:19:37.565201 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:37.585473 systemd-logind[2059]: New session 16 of user core.
Jan 17 12:19:37.594552 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:19:38.287977 sshd[6248]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:38.297381 systemd[1]: sshd@15-172.31.23.9:22-139.178.89.65:42468.service: Deactivated successfully.
Jan 17 12:19:38.301739 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:19:38.303336 systemd-logind[2059]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:19:38.314284 systemd[1]: Started sshd@16-172.31.23.9:22-139.178.89.65:42470.service - OpenSSH per-connection server daemon (139.178.89.65:42470).
Jan 17 12:19:38.316048 systemd-logind[2059]: Removed session 16.
Jan 17 12:19:38.513514 sshd[6260]: Accepted publickey for core from 139.178.89.65 port 42470 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:19:38.517718 sshd[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:38.526420 systemd-logind[2059]: New session 17 of user core.
Jan 17 12:19:38.531804 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:19:41.831302 sshd[6260]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:41.841446 systemd[1]: sshd@16-172.31.23.9:22-139.178.89.65:42470.service: Deactivated successfully.
Jan 17 12:19:41.847372 systemd-logind[2059]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:19:41.849136 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:19:41.876272 systemd[1]: Started sshd@17-172.31.23.9:22-139.178.89.65:46182.service - OpenSSH per-connection server daemon (139.178.89.65:46182).
Jan 17 12:19:41.882958 systemd-logind[2059]: Removed session 17.
Jan 17 12:19:42.104393 sshd[6280]: Accepted publickey for core from 139.178.89.65 port 46182 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:19:42.108174 sshd[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:42.121945 systemd-logind[2059]: New session 18 of user core.
Jan 17 12:19:42.127269 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:19:42.132571 systemd-journald[1570]: Under memory pressure, flushing caches.
Jan 17 12:19:42.131706 systemd-resolved[1972]: Under memory pressure, flushing caches.
Jan 17 12:19:42.131748 systemd-resolved[1972]: Flushed all caches.
Jan 17 12:19:43.454115 sshd[6280]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:43.459962 systemd-logind[2059]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:19:43.462390 systemd[1]: sshd@17-172.31.23.9:22-139.178.89.65:46182.service: Deactivated successfully.
Jan 17 12:19:43.466906 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:19:43.468630 systemd-logind[2059]: Removed session 18.
Jan 17 12:19:43.482556 systemd[1]: Started sshd@18-172.31.23.9:22-139.178.89.65:46196.service - OpenSSH per-connection server daemon (139.178.89.65:46196).
Jan 17 12:19:43.667786 sshd[6292]: Accepted publickey for core from 139.178.89.65 port 46196 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:19:43.670508 sshd[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:43.676933 systemd-logind[2059]: New session 19 of user core.
Jan 17 12:19:43.685201 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:19:43.922198 sshd[6292]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:43.929238 systemd-logind[2059]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:19:43.930184 systemd[1]: sshd@18-172.31.23.9:22-139.178.89.65:46196.service: Deactivated successfully.
Jan 17 12:19:43.934265 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:19:43.935571 systemd-logind[2059]: Removed session 19.
Jan 17 12:19:44.176930 systemd-journald[1570]: Under memory pressure, flushing caches.
Jan 17 12:19:44.175922 systemd-resolved[1972]: Under memory pressure, flushing caches.
Jan 17 12:19:44.175931 systemd-resolved[1972]: Flushed all caches.
Jan 17 12:19:48.951225 systemd[1]: Started sshd@19-172.31.23.9:22-139.178.89.65:46204.service - OpenSSH per-connection server daemon (139.178.89.65:46204).
Jan 17 12:19:49.120657 sshd[6316]: Accepted publickey for core from 139.178.89.65 port 46204 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:19:49.121730 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:49.128500 systemd-logind[2059]: New session 20 of user core.
Jan 17 12:19:49.133299 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:19:49.419658 sshd[6316]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:49.425149 systemd[1]: sshd@19-172.31.23.9:22-139.178.89.65:46204.service: Deactivated successfully.
Jan 17 12:19:49.434001 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:19:49.435231 systemd-logind[2059]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:19:49.436390 systemd-logind[2059]: Removed session 20.
Jan 17 12:19:52.981068 systemd[1]: run-containerd-runc-k8s.io-7e28a4d15f9eca5ecc48b78d99b068e5a8aaf7b32e01750f9b64d2254f76856b-runc.C0jO3W.mount: Deactivated successfully.
Jan 17 12:19:54.447256 systemd[1]: Started sshd@20-172.31.23.9:22-139.178.89.65:33400.service - OpenSSH per-connection server daemon (139.178.89.65:33400).
Jan 17 12:19:54.639702 sshd[6353]: Accepted publickey for core from 139.178.89.65 port 33400 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:19:54.642333 sshd[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:54.650993 systemd-logind[2059]: New session 21 of user core.
Jan 17 12:19:54.664239 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:19:55.150198 sshd[6353]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:55.156567 systemd[1]: sshd@20-172.31.23.9:22-139.178.89.65:33400.service: Deactivated successfully.
Jan 17 12:19:55.165313 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:19:55.165425 systemd-logind[2059]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:19:55.169514 systemd-logind[2059]: Removed session 21.
Jan 17 12:19:56.149118 systemd-journald[1570]: Under memory pressure, flushing caches.
Jan 17 12:19:56.145407 systemd-resolved[1972]: Under memory pressure, flushing caches.
Jan 17 12:19:56.145434 systemd-resolved[1972]: Flushed all caches.
Jan 17 12:20:00.211975 systemd[1]: Started sshd@21-172.31.23.9:22-139.178.89.65:33406.service - OpenSSH per-connection server daemon (139.178.89.65:33406).
Jan 17 12:20:00.400405 sshd[6367]: Accepted publickey for core from 139.178.89.65 port 33406 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:20:00.402312 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:00.408682 systemd-logind[2059]: New session 22 of user core.
Jan 17 12:20:00.418627 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:20:00.826136 sshd[6367]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:00.837195 systemd[1]: sshd@21-172.31.23.9:22-139.178.89.65:33406.service: Deactivated successfully.
Jan 17 12:20:00.843243 systemd-logind[2059]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:20:00.844972 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:20:00.850306 systemd-logind[2059]: Removed session 22.
Jan 17 12:20:05.854589 systemd[1]: Started sshd@22-172.31.23.9:22-139.178.89.65:58256.service - OpenSSH per-connection server daemon (139.178.89.65:58256).
Jan 17 12:20:06.037654 sshd[6383]: Accepted publickey for core from 139.178.89.65 port 58256 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:20:06.039374 sshd[6383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:06.048511 systemd-logind[2059]: New session 23 of user core.
Jan 17 12:20:06.053422 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:20:06.499430 sshd[6383]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:06.505709 systemd[1]: sshd@22-172.31.23.9:22-139.178.89.65:58256.service: Deactivated successfully.
Jan 17 12:20:06.510479 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 12:20:06.510713 systemd-logind[2059]: Session 23 logged out. Waiting for processes to exit.
Jan 17 12:20:06.515341 systemd-logind[2059]: Removed session 23.
Jan 17 12:20:11.532349 systemd[1]: Started sshd@23-172.31.23.9:22-139.178.89.65:55180.service - OpenSSH per-connection server daemon (139.178.89.65:55180).
Jan 17 12:20:11.745206 sshd[6419]: Accepted publickey for core from 139.178.89.65 port 55180 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:20:11.748800 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:11.773609 systemd-logind[2059]: New session 24 of user core.
Jan 17 12:20:11.782003 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 12:20:12.072314 sshd[6419]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:12.076961 systemd[1]: sshd@23-172.31.23.9:22-139.178.89.65:55180.service: Deactivated successfully.
Jan 17 12:20:12.088138 systemd-logind[2059]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:20:12.090279 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:20:12.098205 systemd-logind[2059]: Removed session 24.
Jan 17 12:20:17.124436 systemd[1]: Started sshd@24-172.31.23.9:22-139.178.89.65:55196.service - OpenSSH per-connection server daemon (139.178.89.65:55196).
Jan 17 12:20:17.309687 sshd[6435]: Accepted publickey for core from 139.178.89.65 port 55196 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ
Jan 17 12:20:17.312093 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:17.324010 systemd-logind[2059]: New session 25 of user core.
Jan 17 12:20:17.329347 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 12:20:17.658446 sshd[6435]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:17.662490 systemd[1]: sshd@24-172.31.23.9:22-139.178.89.65:55196.service: Deactivated successfully.
Jan 17 12:20:17.671108 systemd-logind[2059]: Session 25 logged out. Waiting for processes to exit.
Jan 17 12:20:17.672215 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 12:20:17.674585 systemd-logind[2059]: Removed session 25.