Jan 13 21:28:39.062351 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:28:39.062393 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:28:39.062408 kernel: BIOS-provided physical RAM map:
Jan 13 21:28:39.062418 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:28:39.062429 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:28:39.062439 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:28:39.062454 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 21:28:39.062465 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 21:28:39.062476 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 21:28:39.062488 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:28:39.062499 kernel: NX (Execute Disable) protection: active
Jan 13 21:28:39.062510 kernel: APIC: Static calls initialized
Jan 13 21:28:39.062522 kernel: SMBIOS 2.7 present.
Jan 13 21:28:39.062533 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 21:28:39.062551 kernel: Hypervisor detected: KVM
Jan 13 21:28:39.062564 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:28:39.062578 kernel: kvm-clock: using sched offset of 6603847278 cycles
Jan 13 21:28:39.062593 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:28:39.062607 kernel: tsc: Detected 2499.998 MHz processor
Jan 13 21:28:39.062621 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:28:39.062635 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:28:39.062652 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 21:28:39.062666 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:28:39.062680 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:28:39.062693 kernel: Using GB pages for direct mapping
Jan 13 21:28:39.062707 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:28:39.062721 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 21:28:39.062735 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 21:28:39.062749 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 21:28:39.062763 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 21:28:39.062780 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 21:28:39.062794 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 21:28:39.064196 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 21:28:39.064226 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 21:28:39.064240 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 21:28:39.064253 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 21:28:39.064267 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 21:28:39.064280 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 21:28:39.064293 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 21:28:39.064312 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 21:28:39.064331 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 21:28:39.064344 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 21:28:39.064358 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 21:28:39.064372 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 21:28:39.064389 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 21:28:39.064403 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 21:28:39.064416 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 21:28:39.064430 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 21:28:39.064444 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:28:39.064457 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:28:39.064471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 21:28:39.064485 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 21:28:39.064499 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 21:28:39.064516 kernel: Zone ranges:
Jan 13 21:28:39.064530 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:28:39.064544 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 21:28:39.064557 kernel: Normal empty
Jan 13 21:28:39.064571 kernel: Movable zone start for each node
Jan 13 21:28:39.064585 kernel: Early memory node ranges
Jan 13 21:28:39.064598 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:28:39.064612 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 21:28:39.064675 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 21:28:39.064697 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:28:39.064711 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:28:39.064726 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 21:28:39.064740 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 21:28:39.064754 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:28:39.064767 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 21:28:39.064781 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:28:39.064796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:28:39.064864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:28:39.064880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:28:39.064898 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:28:39.064912 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:28:39.064926 kernel: TSC deadline timer available
Jan 13 21:28:39.065104 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:28:39.065225 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:28:39.065239 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 21:28:39.065253 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:28:39.065265 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:28:39.065278 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:28:39.065296 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:28:39.065309 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:28:39.065321 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:28:39.065333 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:28:39.065345 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:28:39.065359 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:28:39.065372 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:28:39.065385 kernel: random: crng init done
Jan 13 21:28:39.065400 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:28:39.065413 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:28:39.065426 kernel: Fallback order for Node 0: 0
Jan 13 21:28:39.065439 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 21:28:39.065451 kernel: Policy zone: DMA32
Jan 13 21:28:39.065463 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:28:39.065477 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 13 21:28:39.065490 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:28:39.065505 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:28:39.065518 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:28:39.065530 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:28:39.065543 kernel: Dynamic Preempt: voluntary
Jan 13 21:28:39.065556 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:28:39.065570 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:28:39.065583 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:28:39.065597 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:28:39.065609 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:28:39.065622 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:28:39.065637 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:28:39.065650 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:28:39.065663 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:28:39.065675 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:28:39.065688 kernel: Console: colour VGA+ 80x25
Jan 13 21:28:39.065700 kernel: printk: console [ttyS0] enabled
Jan 13 21:28:39.065713 kernel: ACPI: Core revision 20230628
Jan 13 21:28:39.065726 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 21:28:39.065738 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:28:39.065754 kernel: x2apic enabled
Jan 13 21:28:39.065767 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:28:39.065791 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 21:28:39.065831 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 13 21:28:39.065845 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 21:28:39.065858 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 21:28:39.065872 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:28:39.065884 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:28:39.065897 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:28:39.065911 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:28:39.065924 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 21:28:39.065938 kernel: RETBleed: Vulnerable
Jan 13 21:28:39.065955 kernel: Speculative Store Bypass: Vulnerable
Jan 13 21:28:39.066104 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:28:39.066120 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:28:39.066134 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 21:28:39.066146 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:28:39.066160 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:28:39.066177 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:28:39.066191 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 21:28:39.066204 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 21:28:39.066217 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 21:28:39.066230 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 21:28:39.066244 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 21:28:39.066257 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 21:28:39.066271 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:28:39.066284 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 21:28:39.066297 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 21:28:39.066310 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 21:28:39.066326 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 21:28:39.067914 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 21:28:39.067936 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 21:28:39.067953 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 21:28:39.067970 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:28:39.068045 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:28:39.068063 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:28:39.068079 kernel: landlock: Up and running.
Jan 13 21:28:39.068096 kernel: SELinux: Initializing.
Jan 13 21:28:39.068112 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:28:39.068128 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:28:39.068144 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 21:28:39.068165 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:28:39.068182 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:28:39.068541 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:28:39.068557 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 21:28:39.068572 kernel: signal: max sigframe size: 3632
Jan 13 21:28:39.068588 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:28:39.068602 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:28:39.068621 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:28:39.068640 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:28:39.068659 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:28:39.068672 kernel: .... node #0, CPUs: #1
Jan 13 21:28:39.068685 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 21:28:39.068701 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 21:28:39.068717 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:28:39.068730 kernel: smpboot: Max logical packages: 1
Jan 13 21:28:39.068744 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 13 21:28:39.068758 kernel: devtmpfs: initialized
Jan 13 21:28:39.068776 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:28:39.068792 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:28:39.069320 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:28:39.069342 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:28:39.069356 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:28:39.069370 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:28:39.069384 kernel: audit: type=2000 audit(1736803718.459:1): state=initialized audit_enabled=0 res=1
Jan 13 21:28:39.069398 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:28:39.069488 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:28:39.069512 kernel: cpuidle: using governor menu
Jan 13 21:28:39.069542 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:28:39.069619 kernel: dca service started, version 1.12.1
Jan 13 21:28:39.069636 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:28:39.069650 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:28:39.069665 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:28:39.069678 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:28:39.069692 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:28:39.069706 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:28:39.069724 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:28:39.069738 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:28:39.069752 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:28:39.069766 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:28:39.069780 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 21:28:39.069794 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:28:39.069939 kernel: ACPI: Interpreter enabled
Jan 13 21:28:39.069953 kernel: ACPI: PM: (supports S0 S5)
Jan 13 21:28:39.069967 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:28:39.069984 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:28:39.069998 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:28:39.070012 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 21:28:39.070025 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:28:39.071484 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:28:39.071654 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:28:39.077255 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:28:39.077305 kernel: acpiphp: Slot [3] registered
Jan 13 21:28:39.077321 kernel: acpiphp: Slot [4] registered
Jan 13 21:28:39.077335 kernel: acpiphp: Slot [5] registered
Jan 13 21:28:39.077349 kernel: acpiphp: Slot [6] registered
Jan 13 21:28:39.077363 kernel: acpiphp: Slot [7] registered
Jan 13 21:28:39.077377 kernel: acpiphp: Slot [8] registered
Jan 13 21:28:39.077390 kernel: acpiphp: Slot [9] registered
Jan 13 21:28:39.077403 kernel: acpiphp: Slot [10] registered
Jan 13 21:28:39.077417 kernel: acpiphp: Slot [11] registered
Jan 13 21:28:39.077536 kernel: acpiphp: Slot [12] registered
Jan 13 21:28:39.077556 kernel: acpiphp: Slot [13] registered
Jan 13 21:28:39.077570 kernel: acpiphp: Slot [14] registered
Jan 13 21:28:39.077583 kernel: acpiphp: Slot [15] registered
Jan 13 21:28:39.077597 kernel: acpiphp: Slot [16] registered
Jan 13 21:28:39.077610 kernel: acpiphp: Slot [17] registered
Jan 13 21:28:39.077623 kernel: acpiphp: Slot [18] registered
Jan 13 21:28:39.077636 kernel: acpiphp: Slot [19] registered
Jan 13 21:28:39.077651 kernel: acpiphp: Slot [20] registered
Jan 13 21:28:39.077664 kernel: acpiphp: Slot [21] registered
Jan 13 21:28:39.077680 kernel: acpiphp: Slot [22] registered
Jan 13 21:28:39.077693 kernel: acpiphp: Slot [23] registered
Jan 13 21:28:39.077707 kernel: acpiphp: Slot [24] registered
Jan 13 21:28:39.077720 kernel: acpiphp: Slot [25] registered
Jan 13 21:28:39.077734 kernel: acpiphp: Slot [26] registered
Jan 13 21:28:39.077747 kernel: acpiphp: Slot [27] registered
Jan 13 21:28:39.077760 kernel: acpiphp: Slot [28] registered
Jan 13 21:28:39.077773 kernel: acpiphp: Slot [29] registered
Jan 13 21:28:39.077786 kernel: acpiphp: Slot [30] registered
Jan 13 21:28:39.077800 kernel: acpiphp: Slot [31] registered
Jan 13 21:28:39.077840 kernel: PCI host bridge to bus 0000:00
Jan 13 21:28:39.078061 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:28:39.078213 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:28:39.078351 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:28:39.079035 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 21:28:39.079325 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:28:39.079587 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:28:39.079762 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 21:28:39.083021 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 21:28:39.083232 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 21:28:39.083379 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 21:28:39.083514 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 21:28:39.083724 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 21:28:39.086308 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 21:28:39.086482 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 21:28:39.086634 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 21:28:39.086787 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 21:28:39.090062 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 20507 usecs
Jan 13 21:28:39.090235 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 21:28:39.090377 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 21:28:39.090525 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 21:28:39.090664 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:28:39.090836 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 21:28:39.090982 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 21:28:39.091134 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 21:28:39.091527 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 21:28:39.091550 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:28:39.091571 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:28:39.091624 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:28:39.091639 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:28:39.091653 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:28:39.091668 kernel: iommu: Default domain type: Translated
Jan 13 21:28:39.091683 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:28:39.091697 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:28:39.091710 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:28:39.091725 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:28:39.091743 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 21:28:39.092060 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 21:28:39.092203 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 21:28:39.092352 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:28:39.092372 kernel: vgaarb: loaded
Jan 13 21:28:39.092388 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 21:28:39.092495 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 21:28:39.092513 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:28:39.092529 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:28:39.092553 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:28:39.092659 kernel: pnp: PnP ACPI init
Jan 13 21:28:39.092678 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:28:39.092694 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:28:39.092711 kernel: NET: Registered PF_INET protocol family
Jan 13 21:28:39.092727 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:28:39.092742 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 21:28:39.092759 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:28:39.092780 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:28:39.092798 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 21:28:39.094859 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 21:28:39.094879 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:28:39.094898 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:28:39.094915 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:28:39.094934 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:28:39.095089 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:28:39.095282 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:28:39.095674 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:28:39.095859 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 21:28:39.096063 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:28:39.096088 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:28:39.096106 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:28:39.096123 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 21:28:39.096141 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:28:39.096158 kernel: Initialise system trusted keyrings
Jan 13 21:28:39.096230 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 21:28:39.096249 kernel: Key type asymmetric registered
Jan 13 21:28:39.096266 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:28:39.096429 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:28:39.096451 kernel: io scheduler mq-deadline registered
Jan 13 21:28:39.096468 kernel: io scheduler kyber registered
Jan 13 21:28:39.096486 kernel: io scheduler bfq registered
Jan 13 21:28:39.096503 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:28:39.096520 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:28:39.096542 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:28:39.096558 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:28:39.096575 kernel: i8042: Warning: Keylock active
Jan 13 21:28:39.096591 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:28:39.096609 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:28:39.096772 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 21:28:39.099390 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 21:28:39.099531 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:28:38 UTC (1736803718)
Jan 13 21:28:39.099658 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 21:28:39.099676 kernel: intel_pstate: CPU model not supported
Jan 13 21:28:39.099692 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:28:39.099706 kernel: Segment Routing with IPv6
Jan 13 21:28:39.099721 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:28:39.099736 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:28:39.099758 kernel: Key type dns_resolver registered
Jan 13 21:28:39.099772 kernel: IPI shorthand broadcast: enabled
Jan 13 21:28:39.099787 kernel: sched_clock: Marking stable (743002758, 368831605)->(1238632325, -126797962)
Jan 13 21:28:39.100844 kernel: registered taskstats version 1
Jan 13 21:28:39.100866 kernel: Loading compiled-in X.509 certificates
Jan 13 21:28:39.100882 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:28:39.100898 kernel: Key type .fscrypt registered
Jan 13 21:28:39.100914 kernel: Key type fscrypt-provisioning registered
Jan 13 21:28:39.100929 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:28:39.100945 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:28:39.100961 kernel: ima: No architecture policies found
Jan 13 21:28:39.100983 kernel: clk: Disabling unused clocks
Jan 13 21:28:39.101001 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:28:39.101019 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:28:39.101036 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:28:39.101053 kernel: Run /init as init process
Jan 13 21:28:39.101070 kernel: with arguments:
Jan 13 21:28:39.101087 kernel: /init
Jan 13 21:28:39.101104 kernel: with environment:
Jan 13 21:28:39.101121 kernel: HOME=/
Jan 13 21:28:39.101138 kernel: TERM=linux
Jan 13 21:28:39.101160 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:28:39.101210 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:28:39.101233 systemd[1]: Detected virtualization amazon.
Jan 13 21:28:39.101253 systemd[1]: Detected architecture x86-64.
Jan 13 21:28:39.101271 systemd[1]: Running in initrd.
Jan 13 21:28:39.101290 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:28:39.101309 systemd[1]: Hostname set to <localhost>.
Jan 13 21:28:39.101333 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:28:39.101352 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:28:39.101371 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:28:39.101390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:28:39.101410 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:28:39.101430 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:28:39.101449 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:28:39.101472 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:28:39.101495 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:28:39.101514 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:28:39.101534 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:28:39.101554 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:28:39.101573 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:28:39.101592 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:28:39.101615 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:28:39.101634 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:28:39.101653 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:28:39.101673 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:28:39.101693 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:28:39.101712 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:28:39.101731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:28:39.101750 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:28:39.101769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:28:39.101793 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:28:39.102852 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:28:39.102879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:28:39.102900 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:28:39.102921 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:28:39.102951 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:28:39.102972 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:28:39.102992 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:28:39.103055 systemd-journald[178]: Collecting audit messages is disabled.
Jan 13 21:28:39.103103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:28:39.103124 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:28:39.103144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:28:39.103164 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:28:39.103186 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:28:39.103210 systemd-journald[178]: Journal started
Jan 13 21:28:39.103247 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2805aaf46a3a0ed293a6c07f9d8ac5) is 4.8M, max 38.6M, 33.7M free.
Jan 13 21:28:39.068668 systemd-modules-load[179]: Inserted module 'overlay'
Jan 13 21:28:39.109209 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:28:39.130060 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:28:39.225520 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:28:39.225558 kernel: Bridge firewalling registered
Jan 13 21:28:39.150118 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 13 21:28:39.231267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:28:39.237869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:28:39.240543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:28:39.258029 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:28:39.261467 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:28:39.264113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:28:39.264786 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:28:39.299921 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:28:39.304184 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:28:39.313438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:28:39.318149 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:28:39.325063 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:28:39.350534 dracut-cmdline[214]: dracut-dracut-053
Jan 13 21:28:39.356480 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:28:39.384182 systemd-resolved[208]: Positive Trust Anchors:
Jan 13 21:28:39.384207 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:28:39.384272 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:28:39.389144 systemd-resolved[208]: Defaulting to hostname 'linux'.
Jan 13 21:28:39.390783 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:28:39.393114 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:28:39.484839 kernel: SCSI subsystem initialized
Jan 13 21:28:39.500842 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:28:39.516155 kernel: iscsi: registered transport (tcp)
Jan 13 21:28:39.543839 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:28:39.543916 kernel: QLogic iSCSI HBA Driver
Jan 13 21:28:39.659450 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:28:39.665297 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:28:39.700836 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:28:39.700919 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:28:39.702554 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:28:39.755840 kernel: raid6: avx512x4 gen() 5021 MB/s
Jan 13 21:28:39.772869 kernel: raid6: avx512x2 gen() 1836 MB/s
Jan 13 21:28:39.792838 kernel: raid6: avx512x1 gen() 6624 MB/s
Jan 13 21:28:39.809865 kernel: raid6: avx2x4 gen() 6232 MB/s
Jan 13 21:28:39.826843 kernel: raid6: avx2x2 gen() 13568 MB/s
Jan 13 21:28:39.843835 kernel: raid6: avx2x1 gen() 12180 MB/s
Jan 13 21:28:39.843930 kernel: raid6: using algorithm avx2x2 gen() 13568 MB/s
Jan 13 21:28:39.861441 kernel: raid6: .... xor() 14241 MB/s, rmw enabled
Jan 13 21:28:39.861612 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 21:28:39.887833 kernel: xor: automatically using best checksumming function avx
Jan 13 21:28:40.081844 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:28:40.093033 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:28:40.101014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:28:40.128289 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 13 21:28:40.133676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:28:40.146601 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:28:40.173449 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 13 21:28:40.215751 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:28:40.223118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:28:40.301748 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:28:40.312122 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:28:40.342351 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:28:40.345652 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:28:40.348636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:28:40.352722 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:28:40.365114 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:28:40.385845 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:28:40.411384 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 21:28:40.435463 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 21:28:40.435660 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 21:28:40.435835 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:28:40.435857 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:55:e6:c9:d7:6f
Jan 13 21:28:40.438915 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:28:40.446330 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:28:40.446524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:28:40.451938 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:28:40.453138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:28:40.453334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:28:40.464963 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:28:40.470957 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:28:40.471021 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:28:40.472167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:28:40.501681 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 21:28:40.502011 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 21:28:40.512832 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 21:28:40.518832 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:28:40.518901 kernel: GPT:9289727 != 16777215
Jan 13 21:28:40.518926 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:28:40.518945 kernel: GPT:9289727 != 16777215
Jan 13 21:28:40.518961 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:28:40.518978 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:28:40.619849 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:28:40.629142 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:28:40.642833 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (445)
Jan 13 21:28:40.654838 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (451)
Jan 13 21:28:40.673052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:28:40.695667 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 21:28:40.732440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 21:28:40.749969 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 21:28:40.757130 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 21:28:40.757295 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 21:28:40.770535 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:28:40.790940 disk-uuid[629]: Primary Header is updated.
Jan 13 21:28:40.790940 disk-uuid[629]: Secondary Entries is updated.
Jan 13 21:28:40.790940 disk-uuid[629]: Secondary Header is updated.
Jan 13 21:28:40.795851 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:28:40.802915 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:28:40.819842 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:28:41.814993 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:28:41.815423 disk-uuid[630]: The operation has completed successfully.
Jan 13 21:28:41.977867 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:28:41.978000 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:28:42.009022 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:28:42.015299 sh[973]: Success
Jan 13 21:28:42.039832 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:28:42.147632 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:28:42.161098 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:28:42.164840 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:28:42.220848 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:28:42.220920 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:28:42.226948 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:28:42.227016 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:28:42.227037 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:28:42.345892 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:28:42.362771 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:28:42.365197 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:28:42.370978 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:28:42.374012 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:28:42.406315 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:28:42.406394 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:28:42.406428 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:28:42.414242 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:28:42.430485 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:28:42.429922 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:28:42.437283 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:28:42.446081 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:28:42.508534 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:28:42.520157 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:28:42.564137 systemd-networkd[1166]: lo: Link UP
Jan 13 21:28:42.564148 systemd-networkd[1166]: lo: Gained carrier
Jan 13 21:28:42.567420 systemd-networkd[1166]: Enumeration completed
Jan 13 21:28:42.567759 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:28:42.567763 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:28:42.568872 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:28:42.570631 systemd[1]: Reached target network.target - Network.
Jan 13 21:28:42.593920 systemd-networkd[1166]: eth0: Link UP
Jan 13 21:28:42.593932 systemd-networkd[1166]: eth0: Gained carrier
Jan 13 21:28:42.593951 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:28:42.628201 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.18.253/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:28:42.697610 ignition[1112]: Ignition 2.19.0
Jan 13 21:28:42.698050 ignition[1112]: Stage: fetch-offline
Jan 13 21:28:42.698325 ignition[1112]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:28:42.698336 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:28:42.699506 ignition[1112]: Ignition finished successfully
Jan 13 21:28:42.702746 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:28:42.709017 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:28:42.725736 ignition[1177]: Ignition 2.19.0
Jan 13 21:28:42.725749 ignition[1177]: Stage: fetch
Jan 13 21:28:42.726210 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:28:42.726223 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:28:42.727530 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:28:42.766403 ignition[1177]: PUT result: OK
Jan 13 21:28:42.773173 ignition[1177]: parsed url from cmdline: ""
Jan 13 21:28:42.773184 ignition[1177]: no config URL provided
Jan 13 21:28:42.773195 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:28:42.773211 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:28:42.773627 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:28:42.774847 ignition[1177]: PUT result: OK
Jan 13 21:28:42.774896 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 21:28:42.789165 ignition[1177]: GET result: OK
Jan 13 21:28:42.789327 ignition[1177]: parsing config with SHA512: 9466311bea33e85727952bcb99f684e94106e786d670eca0afa8c29d28a7f1dedbd5114f4fbbdf2703d3bba15d85164f11ba8c0defcd0d0ba90411df2ebb9b02
Jan 13 21:28:42.801666 unknown[1177]: fetched base config from "system"
Jan 13 21:28:42.802657 unknown[1177]: fetched base config from "system"
Jan 13 21:28:42.802666 unknown[1177]: fetched user config from "aws"
Jan 13 21:28:42.805121 ignition[1177]: fetch: fetch complete
Jan 13 21:28:42.805131 ignition[1177]: fetch: fetch passed
Jan 13 21:28:42.805194 ignition[1177]: Ignition finished successfully
Jan 13 21:28:42.807640 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:28:42.817016 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:28:42.833868 ignition[1183]: Ignition 2.19.0
Jan 13 21:28:42.833880 ignition[1183]: Stage: kargs
Jan 13 21:28:42.834230 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:28:42.834239 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:28:42.834321 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:28:42.836831 ignition[1183]: PUT result: OK
Jan 13 21:28:42.842796 ignition[1183]: kargs: kargs passed
Jan 13 21:28:42.842982 ignition[1183]: Ignition finished successfully
Jan 13 21:28:42.846790 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:28:42.857131 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:28:42.872834 ignition[1190]: Ignition 2.19.0
Jan 13 21:28:42.872848 ignition[1190]: Stage: disks
Jan 13 21:28:42.873205 ignition[1190]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:28:42.873215 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:28:42.873302 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:28:42.875742 ignition[1190]: PUT result: OK
Jan 13 21:28:42.881536 ignition[1190]: disks: disks passed
Jan 13 21:28:42.881603 ignition[1190]: Ignition finished successfully
Jan 13 21:28:42.884424 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:28:42.884854 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:28:42.888919 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:28:42.891644 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:28:42.893627 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:28:42.894837 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:28:42.900980 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:28:42.945910 systemd-fsck[1199]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:28:42.949391 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:28:42.953156 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:28:43.261884 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:28:43.264309 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:28:43.270062 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:28:43.280961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:28:43.293535 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:28:43.301630 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:28:43.302004 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:28:43.302038 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:28:43.313184 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:28:43.320245 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:28:43.325830 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1218)
Jan 13 21:28:43.328644 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:28:43.328698 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:28:43.328718 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:28:43.345362 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:28:43.348073 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:28:43.628816 initrd-setup-root[1247]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:28:43.636232 initrd-setup-root[1254]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:28:43.658762 initrd-setup-root[1261]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:28:43.668553 initrd-setup-root[1268]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:28:43.982216 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:28:43.990046 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:28:43.997063 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:28:44.004931 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:28:44.006676 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:28:44.037840 ignition[1337]: INFO : Ignition 2.19.0
Jan 13 21:28:44.037840 ignition[1337]: INFO : Stage: mount
Jan 13 21:28:44.037840 ignition[1337]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:28:44.037840 ignition[1337]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:28:44.037840 ignition[1337]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:28:44.045347 ignition[1337]: INFO : PUT result: OK
Jan 13 21:28:44.050908 ignition[1337]: INFO : mount: mount passed
Jan 13 21:28:44.050908 ignition[1337]: INFO : Ignition finished successfully
Jan 13 21:28:44.055826 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:28:44.064038 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:28:44.065580 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:28:44.268117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:28:44.307832 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1349)
Jan 13 21:28:44.310193 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:28:44.310251 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:28:44.310271 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:28:44.315834 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:28:44.318310 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:28:44.347695 ignition[1366]: INFO : Ignition 2.19.0 Jan 13 21:28:44.347695 ignition[1366]: INFO : Stage: files Jan 13 21:28:44.350260 ignition[1366]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:28:44.350260 ignition[1366]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:28:44.350260 ignition[1366]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:28:44.350260 ignition[1366]: INFO : PUT result: OK Jan 13 21:28:44.360057 ignition[1366]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:28:44.362566 ignition[1366]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:28:44.362566 ignition[1366]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:28:44.368989 ignition[1366]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:28:44.370682 ignition[1366]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:28:44.373144 unknown[1366]: wrote ssh authorized keys file for user: core Jan 13 21:28:44.385059 ignition[1366]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:28:44.388711 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:28:44.388711 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:28:44.491574 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:28:44.592002 systemd-networkd[1166]: eth0: Gained IPv6LL Jan 13 21:28:44.661359 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:28:44.663747 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:28:44.666514 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:28:44.666514 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:28:44.673373 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:28:44.673373 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:28:44.678991 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:28:44.678991 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:28:44.682930 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:28:44.690869 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:28:44.692903 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:28:44.692903 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:28:44.692903 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:28:44.692903 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:28:44.692903 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 21:28:45.138997 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:28:45.490539 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:28:45.490539 ignition[1366]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:28:45.497545 ignition[1366]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:28:45.501254 ignition[1366]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:28:45.501254 ignition[1366]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:28:45.501254 ignition[1366]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:28:45.501254 ignition[1366]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:28:45.501254 ignition[1366]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:28:45.501254 ignition[1366]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:28:45.501254 ignition[1366]: INFO : files: files passed Jan 13 21:28:45.501254 ignition[1366]: INFO : Ignition finished successfully Jan 13 21:28:45.514385 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:28:45.526057 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:28:45.530195 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:28:45.533924 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:28:45.534064 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:28:45.555245 initrd-setup-root-after-ignition[1394]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:28:45.555245 initrd-setup-root-after-ignition[1394]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:28:45.561898 initrd-setup-root-after-ignition[1398]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:28:45.565751 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:28:45.566321 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:28:45.581107 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
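The files stage above is driven by the Ignition config the instance booted with. A minimal sketch, in Ignition's spec-3 JSON shape, of a fragment that would produce some of the logged operations (the core user's ssh key, the helm tarball download, and the sysext link from /etc/extensions/kubernetes.raw into /opt/extensions); the schema version and the key value are assumptions, not read from this log:

    import json

    config = {
        "ignition": {"version": "3.4.0"},          # assumed spec version
        "passwd": {
            "users": [
                {"name": "core",
                 "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (example key)"]}
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                    },
                }
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
                }
            ],
        },
    }
    print(json.dumps(config, indent=2))

The link is what makes the downloaded kubernetes-v1.31.0 image visible to systemd-sysext once the real root is up; the merge itself happens later in this log.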
Jan 13 21:28:45.647031 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:28:45.648785 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:28:45.652270 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:28:45.654023 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:28:45.658307 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:28:45.665150 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:28:45.694760 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:28:45.703083 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:28:45.715880 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:28:45.719781 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:28:45.721382 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:28:45.726041 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:28:45.726216 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:28:45.731616 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:28:45.734926 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:28:45.740728 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:28:45.747678 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:28:45.747854 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:28:45.755307 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:28:45.756553 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:28:45.763009 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:28:45.764487 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:28:45.765864 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:28:45.770039 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:28:45.770206 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:28:45.775032 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:28:45.780724 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:28:45.784828 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:28:45.784913 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:28:45.788929 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:28:45.789070 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:28:45.792155 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:28:45.792294 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:28:45.797304 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:28:45.797430 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:28:45.812080 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 13 21:28:45.812335 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:28:45.812476 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:28:45.822236 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:28:45.824578 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:28:45.826434 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:28:45.830702 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:28:45.830984 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:28:45.837833 ignition[1418]: INFO : Ignition 2.19.0 Jan 13 21:28:45.837833 ignition[1418]: INFO : Stage: umount Jan 13 21:28:45.841208 ignition[1418]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:28:45.841208 ignition[1418]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:28:45.841208 ignition[1418]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:28:45.841208 ignition[1418]: INFO : PUT result: OK Jan 13 21:28:45.841343 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:28:45.841519 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:28:45.850409 ignition[1418]: INFO : umount: umount passed Jan 13 21:28:45.850409 ignition[1418]: INFO : Ignition finished successfully Jan 13 21:28:45.852539 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:28:45.852681 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:28:45.861994 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:28:45.862332 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:28:45.864389 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:28:45.864472 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:28:45.865498 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:28:45.865556 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:28:45.865770 systemd[1]: Stopped target network.target - Network. Jan 13 21:28:45.866843 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:28:45.866906 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:28:45.867156 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:28:45.867279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:28:45.871834 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:28:45.879941 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:28:45.884169 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:28:45.887989 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:28:45.888220 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:28:45.889923 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:28:45.889982 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:28:45.891125 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:28:45.891195 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:28:45.893751 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 13 21:28:45.893833 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:28:45.896495 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:28:45.898449 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:28:45.901842 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:28:45.902882 systemd-networkd[1166]: eth0: DHCPv6 lease lost Jan 13 21:28:45.904973 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:28:45.905260 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:28:45.912103 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:28:45.912252 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:28:45.917241 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:28:45.917296 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:28:45.930086 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:28:45.932510 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:28:45.932710 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:28:45.934984 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:28:45.935052 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:28:45.937559 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:28:45.937627 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:28:45.940486 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:28:45.940540 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:28:45.945361 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:28:45.965262 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:28:45.965420 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:28:45.969137 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:28:45.969254 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:28:45.972912 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:28:45.972956 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:28:45.974028 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:28:45.974075 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:28:45.977206 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:28:45.977256 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:28:45.980169 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:28:45.980226 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:28:45.996680 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:28:46.000017 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:28:46.000396 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:28:46.005692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 13 21:28:46.005800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:28:46.015601 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:28:46.015724 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:28:46.024764 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:28:46.025110 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:28:46.519109 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:28:46.519440 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:28:46.520032 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:28:46.522686 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:28:46.523762 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:28:46.532015 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:28:46.550413 systemd[1]: Switching root. Jan 13 21:28:46.598175 systemd-journald[178]: Journal stopped Jan 13 21:28:48.471217 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 13 21:28:48.471529 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:28:48.471556 kernel: SELinux: policy capability open_perms=1 Jan 13 21:28:48.471577 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:28:48.471596 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:28:48.471614 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:28:48.471634 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:28:48.471660 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:28:48.471737 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:28:48.471788 kernel: audit: type=1403 audit(1736803727.034:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:28:48.471839 systemd[1]: Successfully loaded SELinux policy in 82.228ms. Jan 13 21:28:48.471881 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.784ms. Jan 13 21:28:48.471902 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:28:48.471921 systemd[1]: Detected virtualization amazon. Jan 13 21:28:48.471940 systemd[1]: Detected architecture x86-64. Jan 13 21:28:48.471958 systemd[1]: Detected first boot. Jan 13 21:28:48.471980 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:28:48.472012 zram_generator::config[1461]: No configuration found. Jan 13 21:28:48.472113 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:28:48.472135 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:28:48.472155 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:28:48.472173 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:28:48.472192 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:28:48.472211 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:28:48.472234 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jan 13 21:28:48.472252 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:28:48.472271 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:28:48.472924 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:28:48.472950 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:28:48.472969 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:28:48.472988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:28:48.473007 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:28:48.473026 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:28:48.473057 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:28:48.473077 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:28:48.473096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:28:48.473115 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:28:48.473134 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:28:48.473153 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:28:48.473172 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:28:48.473235 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:28:48.473268 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:28:48.473287 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:28:48.473304 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:28:48.473376 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:28:48.473411 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:28:48.473439 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:28:48.473460 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:28:48.474487 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:28:48.474533 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:28:48.474561 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:28:48.474583 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:28:48.474606 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:28:48.474627 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:28:48.474648 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:28:48.474670 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:28:48.474691 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:28:48.474712 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:28:48.474736 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 13 21:28:48.474758 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:28:48.474780 systemd[1]: Reached target machines.target - Containers. Jan 13 21:28:48.474823 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:28:48.474845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:28:48.474866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:28:48.474887 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:28:48.474908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:28:48.474929 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:28:48.474953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:28:48.474974 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:28:48.474995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:28:48.475016 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:28:48.475037 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:28:48.475058 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:28:48.475078 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:28:48.475151 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:28:48.475179 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:28:48.475201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:28:48.475221 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:28:48.475242 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:28:48.475843 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:28:48.475874 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:28:48.475896 systemd[1]: Stopped verity-setup.service. Jan 13 21:28:48.475915 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:28:48.475936 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:28:48.475964 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:28:48.475984 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:28:48.476005 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:28:48.476023 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:28:48.476043 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:28:48.476078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:28:48.476139 systemd-journald[1540]: Collecting audit messages is disabled. Jan 13 21:28:48.476182 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 13 21:28:48.476201 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:28:48.476219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:28:48.476240 systemd-journald[1540]: Journal started Jan 13 21:28:48.476278 systemd-journald[1540]: Runtime Journal (/run/log/journal/ec2805aaf46a3a0ed293a6c07f9d8ac5) is 4.8M, max 38.6M, 33.7M free. Jan 13 21:28:48.029864 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:28:48.077290 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 21:28:48.479660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:28:48.077760 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:28:48.482125 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:28:48.487287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:28:48.487560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:28:48.493467 kernel: loop: module loaded Jan 13 21:28:48.491051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:28:48.493064 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:28:48.497786 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:28:48.503324 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:28:48.503541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:28:48.523233 kernel: fuse: init (API version 7.39) Jan 13 21:28:48.529636 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:28:48.531232 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:28:48.536616 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:28:48.550118 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:28:48.569079 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:28:48.570549 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:28:48.570608 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:28:48.574518 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:28:48.586548 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:28:48.596356 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:28:48.598509 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:28:48.606675 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:28:48.611997 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:28:48.613453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:28:48.628218 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:28:48.629746 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 13 21:28:48.647148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:28:48.681847 kernel: ACPI: bus type drm_connector registered Jan 13 21:28:48.683787 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:28:48.690887 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:28:48.695207 systemd-journald[1540]: Time spent on flushing to /var/log/journal/ec2805aaf46a3a0ed293a6c07f9d8ac5 is 122.710ms for 954 entries. Jan 13 21:28:48.695207 systemd-journald[1540]: System Journal (/var/log/journal/ec2805aaf46a3a0ed293a6c07f9d8ac5) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:28:48.843039 systemd-journald[1540]: Received client request to flush runtime journal. Jan 13 21:28:48.843107 kernel: loop0: detected capacity change from 0 to 61336 Jan 13 21:28:48.695027 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:28:48.697153 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:28:48.700060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:28:48.707929 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:28:48.713523 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:28:48.721088 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:28:48.733098 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:28:48.734607 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:28:48.734830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:28:48.768839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:28:48.781039 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:28:48.815508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:28:48.845724 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:28:48.858629 udevadm[1597]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:28:48.880290 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:28:48.882036 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:28:48.897851 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:28:48.899708 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:28:48.907999 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:28:48.935938 kernel: loop1: detected capacity change from 0 to 142488 Jan 13 21:28:48.966681 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. Jan 13 21:28:48.966714 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. Jan 13 21:28:48.975545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 13 21:28:49.042830 kernel: loop2: detected capacity change from 0 to 140768 Jan 13 21:28:49.151840 kernel: loop3: detected capacity change from 0 to 205544 Jan 13 21:28:49.281952 kernel: loop4: detected capacity change from 0 to 61336 Jan 13 21:28:49.321955 kernel: loop5: detected capacity change from 0 to 142488 Jan 13 21:28:49.381901 kernel: loop6: detected capacity change from 0 to 140768 Jan 13 21:28:49.419846 kernel: loop7: detected capacity change from 0 to 205544 Jan 13 21:28:49.472902 (sd-merge)[1613]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 21:28:49.474089 (sd-merge)[1613]: Merged extensions into '/usr'. Jan 13 21:28:49.486877 systemd[1]: Reloading requested from client PID 1585 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:28:49.487245 systemd[1]: Reloading... Jan 13 21:28:49.638902 zram_generator::config[1635]: No configuration found. Jan 13 21:28:49.926347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:28:50.042943 systemd[1]: Reloading finished in 554 ms. Jan 13 21:28:50.078713 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:28:50.093336 systemd[1]: Starting ensure-sysext.service... Jan 13 21:28:50.104543 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:28:50.106403 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:28:50.117084 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:28:50.128942 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:28:50.128971 systemd[1]: Reloading... Jan 13 21:28:50.146358 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:28:50.147636 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:28:50.149758 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:28:50.150314 systemd-tmpfiles[1688]: ACLs are not supported, ignoring. Jan 13 21:28:50.150470 systemd-tmpfiles[1688]: ACLs are not supported, ignoring. Jan 13 21:28:50.159124 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:28:50.161780 systemd-tmpfiles[1688]: Skipping /boot Jan 13 21:28:50.197270 systemd-udevd[1690]: Using default interface naming scheme 'v255'. Jan 13 21:28:50.209283 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:28:50.209436 systemd-tmpfiles[1688]: Skipping /boot Jan 13 21:28:50.308852 zram_generator::config[1720]: No configuration found. Jan 13 21:28:50.365838 ldconfig[1580]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:28:50.510119 (udev-worker)[1765]: Network interface NamePolicy= disabled on kernel command line. 
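The "(sd-merge)" lines are systemd-sysext combining the four extension images named above into /usr, which is why the subsequent reload picks up units such as docker.socket that the base image does not ship. Each merged image publishes an identity file in the combined /usr, which is a quick way to confirm what was pulled in; a sketch following the systemd-sysext file layout:

    import pathlib

    # Merged sysext images each ship an extension-release file; after the
    # merge these are visible under the combined /usr.
    d = pathlib.Path("/usr/lib/extension-release.d")
    for rel in sorted(d.glob("extension-release.*")):
        print(rel.name, "->", rel.read_text().splitlines()[0])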
Jan 13 21:28:50.585415 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 21:28:50.597318 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:28:50.597414 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 13 21:28:50.601841 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 21:28:50.669756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:28:50.686908 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 13 21:28:50.722904 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 21:28:50.749839 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:28:50.763869 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1737) Jan 13 21:28:50.847537 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:28:50.847841 systemd[1]: Reloading finished in 718 ms. Jan 13 21:28:50.869619 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:28:50.873067 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:28:50.882895 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:28:50.988428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:28:50.991761 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:28:50.998250 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:28:51.007414 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:28:51.015475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:28:51.017320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:28:51.020221 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:28:51.031231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:28:51.047245 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:28:51.057185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:28:51.063198 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:28:51.064716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:28:51.103008 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:28:51.107685 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:28:51.108081 lvm[1881]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:28:51.113509 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:28:51.120196 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:28:51.121953 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 13 21:28:51.130273 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:28:51.142528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:28:51.144399 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:28:51.164033 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:28:51.166138 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:28:51.168629 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:28:51.176213 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:28:51.178436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:28:51.178654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:28:51.182324 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:28:51.183126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:28:51.194595 systemd[1]: Finished ensure-sysext.service. Jan 13 21:28:51.199328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:28:51.199895 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:28:51.208874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:28:51.242675 lvm[1902]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:28:51.251113 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:28:51.257976 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:28:51.268434 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:28:51.270323 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:28:51.273179 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:28:51.287426 augenrules[1917]: No rules Jan 13 21:28:51.291490 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:28:51.294554 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:28:51.310459 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:28:51.336208 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:28:51.353501 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:28:51.364170 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:28:51.387398 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 13 21:28:51.530091 systemd-networkd[1894]: lo: Link UP Jan 13 21:28:51.530107 systemd-networkd[1894]: lo: Gained carrier Jan 13 21:28:51.533550 systemd-networkd[1894]: Enumeration completed Jan 13 21:28:51.534150 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:28:51.534256 systemd-networkd[1894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:28:51.537429 systemd-networkd[1894]: eth0: Link UP Jan 13 21:28:51.537633 systemd-networkd[1894]: eth0: Gained carrier Jan 13 21:28:51.537660 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:28:51.550905 systemd-networkd[1894]: eth0: DHCPv4 address 172.31.18.253/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:28:51.551645 systemd-resolved[1895]: Positive Trust Anchors: Jan 13 21:28:51.551664 systemd-resolved[1895]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:28:51.551729 systemd-resolved[1895]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:28:51.563387 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:28:51.565091 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:28:51.566491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:28:51.572012 systemd-resolved[1895]: Defaulting to hostname 'linux'. Jan 13 21:28:51.577276 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:28:51.579100 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:28:51.581262 systemd[1]: Reached target network.target - Network. Jan 13 21:28:51.585092 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:28:51.587632 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:28:51.589476 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:28:51.591118 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:28:51.593480 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:28:51.595262 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:28:51.597026 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:28:51.598274 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:28:51.598306 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:28:51.599341 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:28:51.600967 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
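A quick check of the DHCPv4 lease systemd-networkd logged, 172.31.18.253/20 acquired from 172.31.16.1: the /20 places the address in 172.31.16.0/20, which indeed contains the gateway. Worked through with Python's ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.18.253/20")
    print(iface.network)                                         # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True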
Jan 13 21:28:51.603590 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:28:51.626103 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:28:51.629033 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:28:51.630373 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:28:51.631419 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:28:51.632692 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:28:51.632715 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:28:51.642306 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:28:51.645323 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:28:51.651016 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:28:51.656094 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:28:51.660273 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:28:51.661318 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:28:51.662957 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:28:51.666639 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:28:51.679582 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:28:51.700002 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:28:51.706654 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:28:51.712048 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:28:51.732129 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:28:51.734502 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:28:51.735384 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:28:51.759078 jq[1945]: false Jan 13 21:28:51.747063 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:28:51.758979 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:28:51.783387 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:28:51.784239 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:28:51.822725 update_engine[1954]: I20250113 21:28:51.820554 1954 main.cc:92] Flatcar Update Engine starting Jan 13 21:28:51.834894 jq[1955]: true Jan 13 21:28:51.828466 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:28:51.829608 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:28:51.872173 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:28:51.872442 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
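ssh-key-proc-cmdline.service, started above, installs an ssh key passed on the kernel command line, per its unit description. A rough sketch of that kind of lookup; the parameter syntax sshkey="..." is an assumption here, not confirmed by this log:

    import re

    # Scan /proc/cmdline for an sshkey="..." parameter (name assumed).
    cmdline = open("/proc/cmdline").read()
    m = re.search(r'sshkey="([^"]*)"', cmdline)
    print("found key:", m.group(1)) if m else print("no key on cmdline")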
Jan 13 21:28:51.901565 (ntainerd)[1971]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:28:51.915798 dbus-daemon[1944]: [system] SELinux support is enabled Jan 13 21:28:51.924841 jq[1970]: true Jan 13 21:28:51.927497 update_engine[1954]: I20250113 21:28:51.924623 1954 update_check_scheduler.cc:74] Next update check in 10m50s Jan 13 21:28:51.927576 tar[1962]: linux-amd64/helm Jan 13 21:28:51.926039 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:28:51.931970 dbus-daemon[1944]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1894 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:28:51.950419 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:28:51.950700 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:28:51.955059 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:28:51.955104 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:28:51.958273 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:28:51.957792 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:28:51.978271 extend-filesystems[1946]: Found loop4 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found loop5 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found loop6 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found loop7 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found nvme0n1 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found nvme0n1p1 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found nvme0n1p2 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found nvme0n1p3 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found usr Jan 13 21:28:51.978271 extend-filesystems[1946]: Found nvme0n1p4 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found nvme0n1p6 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found nvme0n1p7 Jan 13 21:28:51.978271 extend-filesystems[1946]: Found nvme0n1p9 Jan 13 21:28:51.978271 extend-filesystems[1946]: Checking size of /dev/nvme0n1p9 Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:51 ntpd[1948]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:51 ntpd[1948]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:51 ntpd[1948]: ---------------------------------------------------- Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:51 ntpd[1948]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:51 ntpd[1948]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:51 ntpd[1948]: corporation. 
Support and training for ntp-4 are Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:51 ntpd[1948]: available at https://www.nwtime.org/support Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:51 ntpd[1948]: ---------------------------------------------------- Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: proto: precision = 0.059 usec (-24) Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: basedate set to 2025-01-01 Jan 13 21:28:52.034873 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: gps base set to 2025-01-05 (week 2348) Jan 13 21:28:52.035690 coreos-metadata[1943]: Jan 13 21:28:52.018 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:28:52.035690 coreos-metadata[1943]: Jan 13 21:28:52.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 21:28:52.035690 coreos-metadata[1943]: Jan 13 21:28:52.024 INFO Fetch successful Jan 13 21:28:52.035690 coreos-metadata[1943]: Jan 13 21:28:52.024 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 21:28:52.035690 coreos-metadata[1943]: Jan 13 21:28:52.032 INFO Fetch successful Jan 13 21:28:52.035690 coreos-metadata[1943]: Jan 13 21:28:52.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 21:28:51.982058 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:28:51.986191 ntpd[1948]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:28:52.036641 coreos-metadata[1943]: Jan 13 21:28:52.035 INFO Fetch successful Jan 13 21:28:52.036641 coreos-metadata[1943]: Jan 13 21:28:52.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 21:28:51.987097 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:28:51.986217 ntpd[1948]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:28:52.040245 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:28:52.040245 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:28:51.986227 ntpd[1948]: ---------------------------------------------------- Jan 13 21:28:51.986237 ntpd[1948]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:28:51.986247 ntpd[1948]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:28:51.986257 ntpd[1948]: corporation. 
Support and training for ntp-4 are Jan 13 21:28:51.986266 ntpd[1948]: available at https://www.nwtime.org/support Jan 13 21:28:51.986276 ntpd[1948]: ---------------------------------------------------- Jan 13 21:28:52.006843 ntpd[1948]: proto: precision = 0.059 usec (-24) Jan 13 21:28:52.017259 ntpd[1948]: basedate set to 2025-01-01 Jan 13 21:28:52.017287 ntpd[1948]: gps base set to 2025-01-05 (week 2348) Jan 13 21:28:52.038716 ntpd[1948]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:28:52.044942 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:28:52.044942 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: Listen normally on 3 eth0 172.31.18.253:123 Jan 13 21:28:52.044942 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: Listen normally on 4 lo [::1]:123 Jan 13 21:28:52.044942 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: bind(21) AF_INET6 fe80::455:e6ff:fec9:d76f%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:28:52.044942 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: unable to create socket on eth0 (5) for fe80::455:e6ff:fec9:d76f%2#123 Jan 13 21:28:52.044942 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: failed to init interface for address fe80::455:e6ff:fec9:d76f%2 Jan 13 21:28:52.044942 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: Listening on routing socket on fd #21 for interface updates Jan 13 21:28:52.045312 coreos-metadata[1943]: Jan 13 21:28:52.041 INFO Fetch successful Jan 13 21:28:52.045312 coreos-metadata[1943]: Jan 13 21:28:52.042 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 21:28:52.038776 ntpd[1948]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:28:52.044155 ntpd[1948]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:28:52.044203 ntpd[1948]: Listen normally on 3 eth0 172.31.18.253:123 Jan 13 21:28:52.044245 ntpd[1948]: Listen normally on 4 lo [::1]:123 Jan 13 21:28:52.044296 ntpd[1948]: bind(21) AF_INET6 fe80::455:e6ff:fec9:d76f%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:28:52.044317 ntpd[1948]: unable to create socket on eth0 (5) for fe80::455:e6ff:fec9:d76f%2#123 Jan 13 21:28:52.044333 ntpd[1948]: failed to init interface for address fe80::455:e6ff:fec9:d76f%2 Jan 13 21:28:52.044365 ntpd[1948]: Listening on routing socket on fd #21 for interface updates Jan 13 21:28:52.048691 ntpd[1948]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:28:52.052315 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:28:52.052427 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:28:52.052334 ntpd[1948]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:28:52.057842 coreos-metadata[1943]: Jan 13 21:28:52.053 INFO Fetch failed with 404: resource not found Jan 13 21:28:52.057842 coreos-metadata[1943]: Jan 13 21:28:52.056 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 21:28:52.061041 coreos-metadata[1943]: Jan 13 21:28:52.061 INFO Fetch successful Jan 13 21:28:52.061041 coreos-metadata[1943]: Jan 13 21:28:52.061 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 21:28:52.069829 coreos-metadata[1943]: Jan 13 21:28:52.068 INFO Fetch successful Jan 13 21:28:52.069829 coreos-metadata[1943]: Jan 13 21:28:52.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 21:28:52.071872 coreos-metadata[1943]: Jan 13 21:28:52.071 INFO Fetch 
successful Jan 13 21:28:52.071872 coreos-metadata[1943]: Jan 13 21:28:52.071 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 21:28:52.080072 coreos-metadata[1943]: Jan 13 21:28:52.080 INFO Fetch successful Jan 13 21:28:52.080170 coreos-metadata[1943]: Jan 13 21:28:52.080 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 21:28:52.082741 extend-filesystems[1946]: Resized partition /dev/nvme0n1p9 Jan 13 21:28:52.085459 coreos-metadata[1943]: Jan 13 21:28:52.085 INFO Fetch successful Jan 13 21:28:52.094930 systemd-logind[1953]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:28:52.095394 systemd-logind[1953]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 21:28:52.095424 systemd-logind[1953]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:28:52.095783 systemd-logind[1953]: New seat seat0. Jan 13 21:28:52.096755 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:28:52.100289 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:28:52.105259 extend-filesystems[2006]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:28:52.119830 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 21:28:52.139980 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:28:52.243849 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 21:28:52.264103 extend-filesystems[2006]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 21:28:52.264103 extend-filesystems[2006]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:28:52.264103 extend-filesystems[2006]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 21:28:52.270062 extend-filesystems[1946]: Resized filesystem in /dev/nvme0n1p9 Jan 13 21:28:52.265995 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:28:52.266231 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:28:52.271747 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:28:52.276789 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:28:52.286240 bash[2024]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:28:52.291146 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:28:52.310651 systemd[1]: Starting sshkeys.service... Jan 13 21:28:52.346472 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:28:52.356037 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:28:52.403413 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1765) Jan 13 21:28:52.436292 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:28:52.438418 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:28:52.443046 dbus-daemon[1944]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1988 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:28:52.478959 systemd[1]: Starting polkit.service - Authorization Manager... 
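The extend-filesystems entries above record resize2fs 1.47.1 growing the root ext4 filesystem on /dev/nvme0n1p9 online, from 553472 to 1489915 blocks at 4 KiB each. A minimal Python sketch of that arithmetic, with the block counts taken straight from the log:

    # Block counts reported by resize2fs above (4 KiB blocks).
    OLD_BLOCKS = 553_472
    NEW_BLOCKS = 1_489_915
    BLOCK_SIZE = 4096  # bytes

    print(f"before: {OLD_BLOCKS * BLOCK_SIZE / 2**30:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {NEW_BLOCKS * BLOCK_SIZE / 2**30:.2f} GiB")  # ~5.68 GiB

So the root filesystem grew from roughly 2.1 GiB to 5.7 GiB while mounted, which is why the log notes "on-line resizing required".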
Jan 13 21:28:52.522832 coreos-metadata[2038]: Jan 13 21:28:52.522 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:28:52.523562 coreos-metadata[2038]: Jan 13 21:28:52.523 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 21:28:52.524014 coreos-metadata[2038]: Jan 13 21:28:52.523 INFO Fetch successful Jan 13 21:28:52.524014 coreos-metadata[2038]: Jan 13 21:28:52.524 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:28:52.529092 coreos-metadata[2038]: Jan 13 21:28:52.529 INFO Fetch successful Jan 13 21:28:52.530921 unknown[2038]: wrote ssh authorized keys file for user: core Jan 13 21:28:52.561264 polkitd[2078]: Started polkitd version 121 Jan 13 21:28:52.604789 polkitd[2078]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:28:52.606901 polkitd[2078]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:28:52.607772 update-ssh-keys[2110]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:28:52.610924 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:28:52.612853 polkitd[2078]: Finished loading, compiling and executing 2 rules Jan 13 21:28:52.618325 systemd[1]: Finished sshkeys.service. Jan 13 21:28:52.628715 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:28:52.628921 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:28:52.635398 polkitd[2078]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:28:52.711149 sshd_keygen[1984]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:28:52.721213 locksmithd[1990]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:28:52.732222 systemd-resolved[1895]: System hostname changed to 'ip-172-31-18-253'. Jan 13 21:28:52.732304 systemd-hostnamed[1988]: Hostname set to (transient) Jan 13 21:28:52.799636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:28:52.810312 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:28:52.825223 systemd[1]: Started sshd@0-172.31.18.253:22-147.75.109.163:58370.service - OpenSSH per-connection server daemon (147.75.109.163:58370). Jan 13 21:28:52.835041 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:28:52.835384 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:28:52.849141 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:28:52.924378 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:28:52.935531 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:28:52.946350 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:28:52.950281 systemd[1]: Reached target getty.target - Login Prompts. 
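The coreos-metadata agent above first PUTs http://169.254.169.254/latest/api/token and then GETs versioned paths such as /2021-01-03/meta-data/public-keys/0/openssh-key. A stdlib-only Python sketch of that IMDSv2 token-then-fetch flow; it only works from inside an EC2 instance, and the real agent is a separate binary, so this is purely illustrative:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT a session token (the "Putting .../latest/api/token" line).
    tok_req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(tok_req, timeout=2).read().decode()

    # Step 2: GET a metadata path with the token attached (the "Fetching" lines).
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/public-keys/0/openssh-key",
        headers={"X-aws-ec2-metadata-token": token})
    print(urllib.request.urlopen(req, timeout=2).read().decode())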
Jan 13 21:28:52.962670 containerd[1971]: time="2025-01-13T21:28:52.962524872Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:28:52.986704 ntpd[1948]: bind(24) AF_INET6 fe80::455:e6ff:fec9:d76f%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:28:52.986756 ntpd[1948]: unable to create socket on eth0 (6) for fe80::455:e6ff:fec9:d76f%2#123 Jan 13 21:28:52.987135 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: bind(24) AF_INET6 fe80::455:e6ff:fec9:d76f%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:28:52.987135 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: unable to create socket on eth0 (6) for fe80::455:e6ff:fec9:d76f%2#123 Jan 13 21:28:52.987135 ntpd[1948]: 13 Jan 21:28:52 ntpd[1948]: failed to init interface for address fe80::455:e6ff:fec9:d76f%2 Jan 13 21:28:52.986771 ntpd[1948]: failed to init interface for address fe80::455:e6ff:fec9:d76f%2 Jan 13 21:28:53.029859 containerd[1971]: time="2025-01-13T21:28:53.029496765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:28:53.033467 containerd[1971]: time="2025-01-13T21:28:53.032853895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:28:53.033467 containerd[1971]: time="2025-01-13T21:28:53.032905915Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:28:53.033467 containerd[1971]: time="2025-01-13T21:28:53.032929664Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:28:53.033467 containerd[1971]: time="2025-01-13T21:28:53.033147324Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:28:53.033467 containerd[1971]: time="2025-01-13T21:28:53.033173077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:28:53.033467 containerd[1971]: time="2025-01-13T21:28:53.033243085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:28:53.033467 containerd[1971]: time="2025-01-13T21:28:53.033261089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:28:53.033790 containerd[1971]: time="2025-01-13T21:28:53.033531330Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:28:53.033790 containerd[1971]: time="2025-01-13T21:28:53.033555893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:28:53.033790 containerd[1971]: time="2025-01-13T21:28:53.033575113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:28:53.033790 containerd[1971]: time="2025-01-13T21:28:53.033594027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:28:53.033790 containerd[1971]: time="2025-01-13T21:28:53.033699121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:28:53.035214 containerd[1971]: time="2025-01-13T21:28:53.034132584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:28:53.035214 containerd[1971]: time="2025-01-13T21:28:53.034439612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:28:53.035214 containerd[1971]: time="2025-01-13T21:28:53.034465210Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:28:53.035214 containerd[1971]: time="2025-01-13T21:28:53.034572370Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:28:53.035214 containerd[1971]: time="2025-01-13T21:28:53.034630114Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:28:53.044793 containerd[1971]: time="2025-01-13T21:28:53.043658444Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:28:53.044793 containerd[1971]: time="2025-01-13T21:28:53.043740750Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:28:53.044793 containerd[1971]: time="2025-01-13T21:28:53.044699663Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:28:53.044793 containerd[1971]: time="2025-01-13T21:28:53.044740717Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:28:53.044793 containerd[1971]: time="2025-01-13T21:28:53.044789589Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:28:53.046392 containerd[1971]: time="2025-01-13T21:28:53.045373105Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:28:53.046392 containerd[1971]: time="2025-01-13T21:28:53.046230063Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:28:53.046392 containerd[1971]: time="2025-01-13T21:28:53.046387154Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:28:53.046536 containerd[1971]: time="2025-01-13T21:28:53.046410588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:28:53.046536 containerd[1971]: time="2025-01-13T21:28:53.046430384Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:28:53.046536 containerd[1971]: time="2025-01-13T21:28:53.046452011Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 13 21:28:53.046536 containerd[1971]: time="2025-01-13T21:28:53.046472719Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:28:53.046536 containerd[1971]: time="2025-01-13T21:28:53.046492782Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:28:53.046536 containerd[1971]: time="2025-01-13T21:28:53.046514624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:28:53.046837 containerd[1971]: time="2025-01-13T21:28:53.046534786Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:28:53.046837 containerd[1971]: time="2025-01-13T21:28:53.046554089Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:28:53.046837 containerd[1971]: time="2025-01-13T21:28:53.046573708Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:28:53.046837 containerd[1971]: time="2025-01-13T21:28:53.046591702Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:28:53.046837 containerd[1971]: time="2025-01-13T21:28:53.046622101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.046837 containerd[1971]: time="2025-01-13T21:28:53.046642714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.046837 containerd[1971]: time="2025-01-13T21:28:53.046662937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.046837 containerd[1971]: time="2025-01-13T21:28:53.046683472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.046837 containerd[1971]: time="2025-01-13T21:28:53.046762569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.046797795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047391508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047433664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047452987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047468316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047480343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047567439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047581585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047598637Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047621229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047632400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047644905Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047752525Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:28:53.048434 containerd[1971]: time="2025-01-13T21:28:53.047778808Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:28:53.049294 containerd[1971]: time="2025-01-13T21:28:53.047877787Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:28:53.049294 containerd[1971]: time="2025-01-13T21:28:53.047896902Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:28:53.049294 containerd[1971]: time="2025-01-13T21:28:53.047909670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:28:53.049294 containerd[1971]: time="2025-01-13T21:28:53.047922532Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:28:53.049294 containerd[1971]: time="2025-01-13T21:28:53.047932117Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:28:53.049294 containerd[1971]: time="2025-01-13T21:28:53.047943427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:28:53.051238 containerd[1971]: time="2025-01-13T21:28:53.048256995Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:28:53.051238 containerd[1971]: time="2025-01-13T21:28:53.048349544Z" level=info msg="Connect containerd service" Jan 13 21:28:53.051238 containerd[1971]: time="2025-01-13T21:28:53.049113100Z" level=info msg="using legacy CRI server" Jan 13 21:28:53.051238 containerd[1971]: time="2025-01-13T21:28:53.049133567Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:28:53.051238 containerd[1971]: time="2025-01-13T21:28:53.049276975Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:28:53.051238 containerd[1971]: time="2025-01-13T21:28:53.050689475Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:28:53.052880 
containerd[1971]: time="2025-01-13T21:28:53.051211146Z" level=info msg="Start subscribing containerd event" Jan 13 21:28:53.052880 containerd[1971]: time="2025-01-13T21:28:53.051527392Z" level=info msg="Start recovering state" Jan 13 21:28:53.052880 containerd[1971]: time="2025-01-13T21:28:53.051779810Z" level=info msg="Start event monitor" Jan 13 21:28:53.052880 containerd[1971]: time="2025-01-13T21:28:53.052688024Z" level=info msg="Start snapshots syncer" Jan 13 21:28:53.052880 containerd[1971]: time="2025-01-13T21:28:53.052704063Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:28:53.052880 containerd[1971]: time="2025-01-13T21:28:53.052714434Z" level=info msg="Start streaming server" Jan 13 21:28:53.058063 containerd[1971]: time="2025-01-13T21:28:53.052987097Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:28:53.058063 containerd[1971]: time="2025-01-13T21:28:53.053048278Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:28:53.058063 containerd[1971]: time="2025-01-13T21:28:53.053113965Z" level=info msg="containerd successfully booted in 0.092217s" Jan 13 21:28:53.053226 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:28:53.114901 sshd[2154]: Accepted publickey for core from 147.75.109.163 port 58370 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:28:53.119909 sshd[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:28:53.140645 systemd-logind[1953]: New session 1 of user core. Jan 13 21:28:53.143307 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:28:53.153317 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:28:53.186550 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:28:53.199212 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:28:53.208448 (systemd)[2169]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:28:53.294944 systemd-networkd[1894]: eth0: Gained IPv6LL Jan 13 21:28:53.299343 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:28:53.303795 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:28:53.316200 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 21:28:53.346477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:28:53.347155 tar[1962]: linux-amd64/LICENSE Jan 13 21:28:53.347676 tar[1962]: linux-amd64/README.md Jan 13 21:28:53.363514 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:28:53.415013 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:28:53.483497 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:28:53.487318 systemd[2169]: Queued start job for default target default.target. Jan 13 21:28:53.491826 amazon-ssm-agent[2176]: Initializing new seelog logger Jan 13 21:28:53.491826 amazon-ssm-agent[2176]: New Seelog Logger Creation Complete Jan 13 21:28:53.491826 amazon-ssm-agent[2176]: 2025/01/13 21:28:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:28:53.491826 amazon-ssm-agent[2176]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 13 21:28:53.491826 amazon-ssm-agent[2176]: 2025/01/13 21:28:53 processing appconfig overrides Jan 13 21:28:53.494338 amazon-ssm-agent[2176]: 2025/01/13 21:28:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:28:53.494338 amazon-ssm-agent[2176]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:28:53.494338 amazon-ssm-agent[2176]: 2025/01/13 21:28:53 processing appconfig overrides Jan 13 21:28:53.494338 amazon-ssm-agent[2176]: 2025/01/13 21:28:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:28:53.494338 amazon-ssm-agent[2176]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:28:53.494338 amazon-ssm-agent[2176]: 2025/01/13 21:28:53 processing appconfig overrides Jan 13 21:28:53.494338 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO Proxy environment variables: Jan 13 21:28:53.495749 systemd[2169]: Created slice app.slice - User Application Slice. Jan 13 21:28:53.495790 systemd[2169]: Reached target paths.target - Paths. Jan 13 21:28:53.495840 systemd[2169]: Reached target timers.target - Timers. Jan 13 21:28:53.498942 systemd[2169]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:28:53.500407 amazon-ssm-agent[2176]: 2025/01/13 21:28:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:28:53.500407 amazon-ssm-agent[2176]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:28:53.500407 amazon-ssm-agent[2176]: 2025/01/13 21:28:53 processing appconfig overrides Jan 13 21:28:53.523404 systemd[2169]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:28:53.525122 systemd[2169]: Reached target sockets.target - Sockets. Jan 13 21:28:53.525158 systemd[2169]: Reached target basic.target - Basic System. Jan 13 21:28:53.525223 systemd[2169]: Reached target default.target - Main User Target. Jan 13 21:28:53.525265 systemd[2169]: Startup finished in 306ms. Jan 13 21:28:53.525434 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:28:53.535032 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:28:53.592881 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO http_proxy: Jan 13 21:28:53.690928 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO no_proxy: Jan 13 21:28:53.720786 systemd[1]: Started sshd@1-172.31.18.253:22-147.75.109.163:58378.service - OpenSSH per-connection server daemon (147.75.109.163:58378). Jan 13 21:28:53.789950 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO https_proxy: Jan 13 21:28:53.890307 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO Checking if agent identity type OnPrem can be assumed Jan 13 21:28:53.923657 sshd[2201]: Accepted publickey for core from 147.75.109.163 port 58378 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:28:53.925945 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:28:53.965575 systemd-logind[1953]: New session 2 of user core. Jan 13 21:28:53.972032 systemd[1]: Started session-2.scope - Session 2 of User core. 
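ntpd's repeated bind(21)/bind(24) failures on fe80::455:e6ff:fec9:d76f%2 earlier in the log happen because the link-local address is not yet usable; systemd-networkd only reports "eth0: Gained IPv6LL" above, and ntpd picks the address up later. A small Python sketch, assuming a Linux host, that checks /proc/net/if_inet6 for a link-local address on an interface:

    # /proc/net/if_inet6 fields: address, ifindex, prefixlen, scope, flags, name.
    def has_ipv6_link_local(ifname: str) -> bool:
        with open("/proc/net/if_inet6") as f:
            for line in f:
                addr, *_, name = line.split()
                if name == ifname and addr.startswith("fe80"):
                    return True
        return False

    print(has_ipv6_link_local("eth0"))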
Jan 13 21:28:53.989190 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO Checking if agent identity type EC2 can be assumed Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO Agent will take identity from EC2 Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [Registrar] Starting registrar module Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [EC2Identity] EC2 registration was successful. Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [CredentialRefresher] credentialRefresher has started Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 21:28:53.997284 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 21:28:54.087608 amazon-ssm-agent[2176]: 2025-01-13 21:28:53 INFO [CredentialRefresher] Next credential rotation will be in 31.09166110335 minutes Jan 13 21:28:54.106212 sshd[2201]: pam_unix(sshd:session): session closed for user core Jan 13 21:28:54.109974 systemd[1]: sshd@1-172.31.18.253:22-147.75.109.163:58378.service: Deactivated successfully. Jan 13 21:28:54.111672 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:28:54.115259 systemd-logind[1953]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:28:54.117010 systemd-logind[1953]: Removed session 2. Jan 13 21:28:54.140459 systemd[1]: Started sshd@2-172.31.18.253:22-147.75.109.163:58386.service - OpenSSH per-connection server daemon (147.75.109.163:58386). Jan 13 21:28:54.316407 sshd[2209]: Accepted publickey for core from 147.75.109.163 port 58386 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:28:54.317817 sshd[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:28:54.324015 systemd-logind[1953]: New session 3 of user core. Jan 13 21:28:54.334071 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:28:54.455934 sshd[2209]: pam_unix(sshd:session): session closed for user core Jan 13 21:28:54.461940 systemd[1]: sshd@2-172.31.18.253:22-147.75.109.163:58386.service: Deactivated successfully. Jan 13 21:28:54.464276 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:28:54.466180 systemd-logind[1953]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:28:54.467254 systemd-logind[1953]: Removed session 3. 
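The sshd entries in this stretch follow a fixed format ("Accepted publickey for core from 147.75.109.163 port 58370 ssh2: RSA SHA256:..."). A small Python regex sketch for pulling the fields out of such a line; the sample is copied from the log above:

    import re

    LINE = ("Accepted publickey for core from 147.75.109.163 port 58370 "
            "ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc")

    PAT = re.compile(
        r"Accepted (?P<method>\S+) for (?P<user>\S+) from (?P<ip>\S+) "
        r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)")

    print(PAT.search(LINE).groupdict())
    # {'method': 'publickey', 'user': 'core', 'ip': '147.75.109.163', ...}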
Jan 13 21:28:55.013848 amazon-ssm-agent[2176]: 2025-01-13 21:28:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 21:28:55.116036 amazon-ssm-agent[2176]: 2025-01-13 21:28:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2216) started Jan 13 21:28:55.217144 amazon-ssm-agent[2176]: 2025-01-13 21:28:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 21:28:55.718347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:28:55.721727 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:28:55.729963 systemd[1]: Startup finished in 923ms (kernel) + 8.231s (initrd) + 8.775s (userspace) = 17.930s. Jan 13 21:28:55.879716 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:28:55.986970 ntpd[1948]: Listen normally on 7 eth0 [fe80::455:e6ff:fec9:d76f%2]:123 Jan 13 21:28:55.987655 ntpd[1948]: 13 Jan 21:28:55 ntpd[1948]: Listen normally on 7 eth0 [fe80::455:e6ff:fec9:d76f%2]:123 Jan 13 21:28:57.138138 kubelet[2232]: E0113 21:28:57.138082 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:28:57.141141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:28:57.141357 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:28:57.141746 systemd[1]: kubelet.service: Consumed 1.004s CPU time. Jan 13 21:29:00.079936 systemd-resolved[1895]: Clock change detected. Flushing caches. Jan 13 21:29:05.585098 systemd[1]: Started sshd@3-172.31.18.253:22-147.75.109.163:34346.service - OpenSSH per-connection server daemon (147.75.109.163:34346). Jan 13 21:29:05.772524 sshd[2244]: Accepted publickey for core from 147.75.109.163 port 34346 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:29:05.774474 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:05.788928 systemd-logind[1953]: New session 4 of user core. Jan 13 21:29:05.797910 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:29:05.932446 sshd[2244]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:05.939199 systemd[1]: sshd@3-172.31.18.253:22-147.75.109.163:34346.service: Deactivated successfully. Jan 13 21:29:05.941851 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:29:05.943385 systemd-logind[1953]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:29:05.944950 systemd-logind[1953]: Removed session 4. Jan 13 21:29:05.971773 systemd[1]: Started sshd@4-172.31.18.253:22-147.75.109.163:34352.service - OpenSSH per-connection server daemon (147.75.109.163:34352). Jan 13 21:29:06.152630 sshd[2251]: Accepted publickey for core from 147.75.109.163 port 34352 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:29:06.154635 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:06.161682 systemd-logind[1953]: New session 5 of user core. 
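kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; on a node like this the file is normally written by kubeadm (note the KUBELET_KUBEADM_ARGS drop-in variable in the warning), so the crash-loop is expected until kubeadm runs. A trivial preflight sketch of the same check:

    import os, sys

    CONFIG = "/var/lib/kubelet/config.yaml"  # path from the kubelet error above
    if not os.path.exists(CONFIG):
        sys.exit(f"{CONFIG} missing - kubelet will crash-loop until kubeadm writes it")
    print("kubelet config present")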
Jan 13 21:29:06.168079 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:29:06.293988 sshd[2251]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:06.298443 systemd[1]: sshd@4-172.31.18.253:22-147.75.109.163:34352.service: Deactivated successfully. Jan 13 21:29:06.300779 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:29:06.305560 systemd-logind[1953]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:29:06.306839 systemd-logind[1953]: Removed session 5. Jan 13 21:29:06.326722 systemd[1]: Started sshd@5-172.31.18.253:22-147.75.109.163:34366.service - OpenSSH per-connection server daemon (147.75.109.163:34366). Jan 13 21:29:06.518341 sshd[2258]: Accepted publickey for core from 147.75.109.163 port 34366 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:29:06.520313 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:06.526482 systemd-logind[1953]: New session 6 of user core. Jan 13 21:29:06.537995 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:29:06.679287 sshd[2258]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:06.684593 systemd[1]: sshd@5-172.31.18.253:22-147.75.109.163:34366.service: Deactivated successfully. Jan 13 21:29:06.687232 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:29:06.689742 systemd-logind[1953]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:29:06.691256 systemd-logind[1953]: Removed session 6. Jan 13 21:29:06.715184 systemd[1]: Started sshd@6-172.31.18.253:22-147.75.109.163:34370.service - OpenSSH per-connection server daemon (147.75.109.163:34370). Jan 13 21:29:06.877595 sshd[2265]: Accepted publickey for core from 147.75.109.163 port 34370 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:29:06.880823 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:06.887309 systemd-logind[1953]: New session 7 of user core. Jan 13 21:29:06.894941 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:29:07.035107 sudo[2268]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:29:07.035584 sudo[2268]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:07.050863 sudo[2268]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:07.074620 sshd[2265]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:07.081929 systemd[1]: sshd@6-172.31.18.253:22-147.75.109.163:34370.service: Deactivated successfully. Jan 13 21:29:07.086913 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:29:07.089698 systemd-logind[1953]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:29:07.091404 systemd-logind[1953]: Removed session 7. Jan 13 21:29:07.110927 systemd[1]: Started sshd@7-172.31.18.253:22-147.75.109.163:34376.service - OpenSSH per-connection server daemon (147.75.109.163:34376). Jan 13 21:29:07.290613 sshd[2273]: Accepted publickey for core from 147.75.109.163 port 34376 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:29:07.292981 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:07.299010 systemd-logind[1953]: New session 8 of user core. Jan 13 21:29:07.309908 systemd[1]: Started session-8.scope - Session 8 of User core. 
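Session 7's sudo command above runs setenforce 1, switching SELinux from permissive to enforcing at runtime (dbus-daemon already reported SELinux support enabled). The current mode can be read back through selinuxfs; a minimal sketch, assuming selinuxfs is mounted at its usual location:

    # 1 = enforcing, 0 = permissive; setenforce toggles this at runtime.
    with open("/sys/fs/selinux/enforce") as f:
        print("enforcing" if f.read().strip() == "1" else "permissive")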
Jan 13 21:29:07.428504 sudo[2277]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:29:07.429083 sudo[2277]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:07.435562 sudo[2277]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:07.444223 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:29:07.444841 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:07.462245 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:29:07.465933 auditctl[2280]: No rules Jan 13 21:29:07.466314 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:29:07.466591 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:29:07.477005 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:29:07.525057 augenrules[2298]: No rules Jan 13 21:29:07.526728 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:29:07.527872 sudo[2276]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:07.555173 sshd[2273]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:07.560740 systemd[1]: sshd@7-172.31.18.253:22-147.75.109.163:34376.service: Deactivated successfully. Jan 13 21:29:07.565692 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:29:07.567409 systemd-logind[1953]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:29:07.569089 systemd-logind[1953]: Removed session 8. Jan 13 21:29:07.595360 systemd[1]: Started sshd@8-172.31.18.253:22-147.75.109.163:50954.service - OpenSSH per-connection server daemon (147.75.109.163:50954). Jan 13 21:29:07.770698 sshd[2306]: Accepted publickey for core from 147.75.109.163 port 50954 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:29:07.772245 sshd[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:07.778832 systemd-logind[1953]: New session 9 of user core. Jan 13 21:29:07.786860 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:29:07.889528 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:29:07.889937 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:08.486931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:29:08.510438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:08.624519 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:29:08.624816 (dockerd)[2329]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:29:08.927918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
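The audit-rules restart above ends with both auditctl[2280] and augenrules[2298] reporting "No rules": the kernel rule list was flushed and the regenerated ruleset is also empty (the two 80-selinux/99-default rule files were deleted first). Listing the loaded rules requires root; a subprocess sketch:

    import subprocess

    # 'auditctl -l' prints the loaded kernel audit rules, or "No rules".
    out = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
    print(out.stdout.strip() or out.stderr.strip())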
Jan 13 21:29:08.940131 (kubelet)[2334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:29:09.010230 kubelet[2334]: E0113 21:29:09.010178 2334 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:29:09.022925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:29:09.023121 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:29:09.330201 dockerd[2329]: time="2025-01-13T21:29:09.329624517Z" level=info msg="Starting up" Jan 13 21:29:09.500578 dockerd[2329]: time="2025-01-13T21:29:09.500495536Z" level=info msg="Loading containers: start." Jan 13 21:29:09.694678 kernel: Initializing XFRM netlink socket Jan 13 21:29:09.723196 (udev-worker)[2362]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:29:09.786172 systemd-networkd[1894]: docker0: Link UP Jan 13 21:29:09.804239 dockerd[2329]: time="2025-01-13T21:29:09.804189354Z" level=info msg="Loading containers: done." Jan 13 21:29:09.823312 dockerd[2329]: time="2025-01-13T21:29:09.823264072Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:29:09.823499 dockerd[2329]: time="2025-01-13T21:29:09.823416069Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:29:09.823593 dockerd[2329]: time="2025-01-13T21:29:09.823569773Z" level=info msg="Daemon has completed initialization" Jan 13 21:29:09.858558 dockerd[2329]: time="2025-01-13T21:29:09.858503127Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:29:09.858806 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:29:11.490686 containerd[1971]: time="2025-01-13T21:29:11.490501417Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 21:29:12.193980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640031706.mount: Deactivated successfully. 
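Once dockerd finishes "Loading containers", systemd-networkd sees the freshly created bridge and logs "docker0: Link UP". The bridge state is visible through sysfs; a small sketch (the operstate of an empty bridge typically reads "down" or "no-carrier" until a container attaches):

    # Bridge created by dockerd; networkd logged "docker0: Link UP" above.
    with open("/sys/class/net/docker0/operstate") as f:
        print(f.read().strip())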
Jan 13 21:29:14.133172 containerd[1971]: time="2025-01-13T21:29:14.133036522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:14.135163 containerd[1971]: time="2025-01-13T21:29:14.134915817Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Jan 13 21:29:14.136873 containerd[1971]: time="2025-01-13T21:29:14.136330795Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:14.140283 containerd[1971]: time="2025-01-13T21:29:14.140235824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:14.141501 containerd[1971]: time="2025-01-13T21:29:14.141199121Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.650501371s" Jan 13 21:29:14.141501 containerd[1971]: time="2025-01-13T21:29:14.141243008Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Jan 13 21:29:14.145172 containerd[1971]: time="2025-01-13T21:29:14.145131096Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 21:29:16.078224 containerd[1971]: time="2025-01-13T21:29:16.078173091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:16.079852 containerd[1971]: time="2025-01-13T21:29:16.079790101Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Jan 13 21:29:16.080548 containerd[1971]: time="2025-01-13T21:29:16.080493352Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:16.085518 containerd[1971]: time="2025-01-13T21:29:16.084156060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:16.085518 containerd[1971]: time="2025-01-13T21:29:16.085365310Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.940192882s" Jan 13 21:29:16.085518 containerd[1971]: time="2025-01-13T21:29:16.085409392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Jan 13 21:29:16.086495 
containerd[1971]: time="2025-01-13T21:29:16.086463853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 21:29:17.718676 containerd[1971]: time="2025-01-13T21:29:17.718615977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:17.720315 containerd[1971]: time="2025-01-13T21:29:17.720237181Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Jan 13 21:29:17.721686 containerd[1971]: time="2025-01-13T21:29:17.721034285Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:17.725878 containerd[1971]: time="2025-01-13T21:29:17.725833165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:17.729612 containerd[1971]: time="2025-01-13T21:29:17.727516996Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.641018587s" Jan 13 21:29:17.729612 containerd[1971]: time="2025-01-13T21:29:17.729609026Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Jan 13 21:29:17.730457 containerd[1971]: time="2025-01-13T21:29:17.730403664Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:29:19.172481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985305175.mount: Deactivated successfully. Jan 13 21:29:19.174250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:29:19.185402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:19.430921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:19.436426 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:29:19.511962 kubelet[2558]: E0113 21:29:19.511596 2558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:29:19.516822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:29:19.517080 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
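The var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount units being deactivated here are systemd mount units: in a unit name, "-" stands for a "/" path separator, and a literal "-" inside a path component is escaped as \x2d. A simplified Python sketch of the reverse mapping (real systemd-escape handles many more characters; this only undoes \x2d):

    def unit_to_path(unit: str) -> str:
        name = unit.removesuffix(".mount")
        # "-" separates path components; a literal "-" is escaped as \x2d.
        return "/" + "/".join(p.replace("\\x2d", "-") for p in name.split("-"))

    print(unit_to_path("var-lib-containerd-tmpmounts-containerd\\x2dmount985305175.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount985305175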
Jan 13 21:29:19.980657 containerd[1971]: time="2025-01-13T21:29:19.980554255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:19.982111 containerd[1971]: time="2025-01-13T21:29:19.981943434Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Jan 13 21:29:19.984904 containerd[1971]: time="2025-01-13T21:29:19.983371791Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:19.987262 containerd[1971]: time="2025-01-13T21:29:19.985911591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:19.987262 containerd[1971]: time="2025-01-13T21:29:19.986944623Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.256480479s" Jan 13 21:29:19.987262 containerd[1971]: time="2025-01-13T21:29:19.986980903Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 21:29:19.987765 containerd[1971]: time="2025-01-13T21:29:19.987741547Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:29:20.611585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249123193.mount: Deactivated successfully. 
Jan 13 21:29:21.902130 containerd[1971]: time="2025-01-13T21:29:21.902080419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:21.904965 containerd[1971]: time="2025-01-13T21:29:21.904899839Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:29:21.910055 containerd[1971]: time="2025-01-13T21:29:21.909981035Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:21.915235 containerd[1971]: time="2025-01-13T21:29:21.914822250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:21.916391 containerd[1971]: time="2025-01-13T21:29:21.916355593Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.928502542s" Jan 13 21:29:21.916490 containerd[1971]: time="2025-01-13T21:29:21.916392648Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:29:21.917327 containerd[1971]: time="2025-01-13T21:29:21.917298161Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 21:29:22.446899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977770798.mount: Deactivated successfully. 
Jan 13 21:29:22.454455 containerd[1971]: time="2025-01-13T21:29:22.454407974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:22.455624 containerd[1971]: time="2025-01-13T21:29:22.455573083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 13 21:29:22.458121 containerd[1971]: time="2025-01-13T21:29:22.456437970Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:22.460363 containerd[1971]: time="2025-01-13T21:29:22.459332813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:22.460363 containerd[1971]: time="2025-01-13T21:29:22.460220094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 542.886487ms" Jan 13 21:29:22.460363 containerd[1971]: time="2025-01-13T21:29:22.460257136Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 13 21:29:22.461437 containerd[1971]: time="2025-01-13T21:29:22.461406963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 21:29:23.097198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400528113.mount: Deactivated successfully. Jan 13 21:29:23.831565 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 21:29:26.147096 containerd[1971]: time="2025-01-13T21:29:26.147041539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:26.150221 containerd[1971]: time="2025-01-13T21:29:26.150118845Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 13 21:29:26.152358 containerd[1971]: time="2025-01-13T21:29:26.152320319Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:26.156489 containerd[1971]: time="2025-01-13T21:29:26.156419568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:26.160692 containerd[1971]: time="2025-01-13T21:29:26.159225221Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.697779342s" Jan 13 21:29:26.160692 containerd[1971]: time="2025-01-13T21:29:26.159278233Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 13 21:29:29.598350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:29:29.610025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:29.628803 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:29:29.628913 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:29:29.629215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:29.637107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:29.687407 systemd[1]: Reloading requested from client PID 2700 ('systemctl') (unit session-9.scope)... Jan 13 21:29:29.687610 systemd[1]: Reloading... Jan 13 21:29:29.837907 zram_generator::config[2740]: No configuration found. Jan 13 21:29:30.065004 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:29:30.207344 systemd[1]: Reloading finished in 519 ms. Jan 13 21:29:30.265838 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:29:30.265959 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:29:30.266491 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:30.274108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:30.493448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:30.508168 (kubelet)[2800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:29:30.593224 kubelet[2800]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:30.593224 kubelet[2800]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:29:30.593224 kubelet[2800]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:30.595185 kubelet[2800]: I0113 21:29:30.595063 2800 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:29:31.285487 kubelet[2800]: I0113 21:29:31.285437 2800 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:29:31.285487 kubelet[2800]: I0113 21:29:31.285471 2800 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:29:31.285844 kubelet[2800]: I0113 21:29:31.285821 2800 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:29:31.327266 kubelet[2800]: I0113 21:29:31.327229 2800 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:29:31.329773 kubelet[2800]: E0113 21:29:31.329711 2800 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:31.343208 kubelet[2800]: E0113 21:29:31.343163 2800 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:29:31.343208 kubelet[2800]: I0113 21:29:31.343199 2800 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:29:31.349129 kubelet[2800]: I0113 21:29:31.349100 2800 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:29:31.351452 kubelet[2800]: I0113 21:29:31.351414 2800 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:29:31.351700 kubelet[2800]: I0113 21:29:31.351607 2800 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:29:31.351912 kubelet[2800]: I0113 21:29:31.351708 2800 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-253","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:29:31.352046 kubelet[2800]: I0113 21:29:31.351918 2800 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:29:31.352046 kubelet[2800]: I0113 21:29:31.351933 2800 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:29:31.352135 kubelet[2800]: I0113 21:29:31.352068 2800 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:31.355580 kubelet[2800]: I0113 21:29:31.354979 2800 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:29:31.355580 kubelet[2800]: I0113 21:29:31.355016 2800 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:29:31.355580 kubelet[2800]: I0113 21:29:31.355059 2800 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:29:31.355580 kubelet[2800]: I0113 21:29:31.355073 2800 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:29:31.370532 kubelet[2800]: W0113 21:29:31.370467 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-253&limit=500&resourceVersion=0": dial tcp 172.31.18.253:6443: connect: connection refused Jan 13 21:29:31.370717 kubelet[2800]: E0113 21:29:31.370547 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.18.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-253&limit=500&resourceVersion=0\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:31.370717 kubelet[2800]: W0113 21:29:31.370662 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.253:6443: connect: connection refused Jan 13 21:29:31.370717 kubelet[2800]: E0113 21:29:31.370706 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:31.371875 kubelet[2800]: I0113 21:29:31.371722 2800 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:29:31.374282 kubelet[2800]: I0113 21:29:31.374245 2800 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:29:31.376733 kubelet[2800]: W0113 21:29:31.375432 2800 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:29:31.381672 kubelet[2800]: I0113 21:29:31.381325 2800 server.go:1269] "Started kubelet" Jan 13 21:29:31.384057 kubelet[2800]: I0113 21:29:31.384022 2800 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:29:31.386159 kubelet[2800]: I0113 21:29:31.385482 2800 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:29:31.388538 kubelet[2800]: I0113 21:29:31.387027 2800 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:29:31.388538 kubelet[2800]: I0113 21:29:31.388279 2800 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:29:31.388882 kubelet[2800]: I0113 21:29:31.388866 2800 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:29:31.394013 kubelet[2800]: I0113 21:29:31.393983 2800 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:29:31.394629 kubelet[2800]: E0113 21:29:31.394600 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-253\" not found" Jan 13 21:29:31.398598 kubelet[2800]: I0113 21:29:31.398568 2800 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:29:31.398852 kubelet[2800]: I0113 21:29:31.398729 2800 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:29:31.405462 kubelet[2800]: I0113 21:29:31.404963 2800 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:29:31.405936 kubelet[2800]: E0113 21:29:31.405893 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-253?timeout=10s\": dial tcp 172.31.18.253:6443: connect: connection refused" interval="200ms" Jan 13 21:29:31.411495 kubelet[2800]: I0113 21:29:31.411461 2800 factory.go:221] Registration of the systemd container factory 
successfully Jan 13 21:29:31.412858 kubelet[2800]: I0113 21:29:31.412722 2800 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:29:31.419227 kubelet[2800]: W0113 21:29:31.419010 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.253:6443: connect: connection refused Jan 13 21:29:31.419227 kubelet[2800]: E0113 21:29:31.419156 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:31.425033 kubelet[2800]: E0113 21:29:31.413282 2800 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.253:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.253:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-253.181a5dd3fdbd9c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-253,UID:ip-172-31-18-253,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-253,},FirstTimestamp:2025-01-13 21:29:31.38129204 +0000 UTC m=+0.868212667,LastTimestamp:2025-01-13 21:29:31.38129204 +0000 UTC m=+0.868212667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-253,}" Jan 13 21:29:31.425499 kubelet[2800]: I0113 21:29:31.425246 2800 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:29:31.430807 kubelet[2800]: I0113 21:29:31.430775 2800 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:29:31.431085 kubelet[2800]: I0113 21:29:31.431072 2800 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:29:31.431287 kubelet[2800]: I0113 21:29:31.431276 2800 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:29:31.431467 kubelet[2800]: E0113 21:29:31.431418 2800 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:29:31.438568 kubelet[2800]: W0113 21:29:31.438488 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.253:6443: connect: connection refused Jan 13 21:29:31.439079 kubelet[2800]: E0113 21:29:31.439047 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:31.441534 kubelet[2800]: I0113 21:29:31.441475 2800 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:29:31.467027 kubelet[2800]: I0113 21:29:31.466975 2800 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:29:31.467027 kubelet[2800]: I0113 21:29:31.467030 2800 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:29:31.470989 kubelet[2800]: I0113 21:29:31.467054 2800 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:31.472328 kubelet[2800]: I0113 21:29:31.472302 2800 policy_none.go:49] "None policy: Start" Jan 13 21:29:31.473668 kubelet[2800]: I0113 21:29:31.473651 2800 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:29:31.474023 kubelet[2800]: I0113 21:29:31.473681 2800 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:29:31.481651 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:29:31.493413 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:29:31.494797 kubelet[2800]: E0113 21:29:31.494775 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-253\" not found" Jan 13 21:29:31.498573 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 21:29:31.507369 kubelet[2800]: I0113 21:29:31.506631 2800 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:29:31.507369 kubelet[2800]: I0113 21:29:31.507004 2800 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:29:31.507369 kubelet[2800]: I0113 21:29:31.507019 2800 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:29:31.507369 kubelet[2800]: I0113 21:29:31.507263 2800 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:29:31.510089 kubelet[2800]: E0113 21:29:31.510064 2800 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-253\" not found" Jan 13 21:29:31.549122 systemd[1]: Created slice kubepods-burstable-poda09f2db99f5dd4fd6425fd77074bdcf3.slice - libcontainer container kubepods-burstable-poda09f2db99f5dd4fd6425fd77074bdcf3.slice. Jan 13 21:29:31.568991 systemd[1]: Created slice kubepods-burstable-pod61056649c26880237568717b66f67561.slice - libcontainer container kubepods-burstable-pod61056649c26880237568717b66f67561.slice. Jan 13 21:29:31.584149 systemd[1]: Created slice kubepods-burstable-pod43b12b4f87903baf65a80f03fdbef97f.slice - libcontainer container kubepods-burstable-pod43b12b4f87903baf65a80f03fdbef97f.slice. Jan 13 21:29:31.599749 kubelet[2800]: I0113 21:29:31.599701 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:31.599749 kubelet[2800]: I0113 21:29:31.599741 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43b12b4f87903baf65a80f03fdbef97f-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-253\" (UID: \"43b12b4f87903baf65a80f03fdbef97f\") " pod="kube-system/kube-scheduler-ip-172-31-18-253" Jan 13 21:29:31.600415 kubelet[2800]: I0113 21:29:31.599762 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:31.600415 kubelet[2800]: I0113 21:29:31.599784 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:31.600415 kubelet[2800]: I0113 21:29:31.599803 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a09f2db99f5dd4fd6425fd77074bdcf3-ca-certs\") pod \"kube-apiserver-ip-172-31-18-253\" (UID: \"a09f2db99f5dd4fd6425fd77074bdcf3\") " pod="kube-system/kube-apiserver-ip-172-31-18-253" Jan 13 21:29:31.600415 kubelet[2800]: I0113 21:29:31.599822 2800 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a09f2db99f5dd4fd6425fd77074bdcf3-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-253\" (UID: \"a09f2db99f5dd4fd6425fd77074bdcf3\") " pod="kube-system/kube-apiserver-ip-172-31-18-253" Jan 13 21:29:31.600415 kubelet[2800]: I0113 21:29:31.599845 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a09f2db99f5dd4fd6425fd77074bdcf3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-253\" (UID: \"a09f2db99f5dd4fd6425fd77074bdcf3\") " pod="kube-system/kube-apiserver-ip-172-31-18-253" Jan 13 21:29:31.600598 kubelet[2800]: I0113 21:29:31.599936 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:31.600598 kubelet[2800]: I0113 21:29:31.599965 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:31.607191 kubelet[2800]: E0113 21:29:31.607143 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-253?timeout=10s\": dial tcp 172.31.18.253:6443: connect: connection refused" interval="400ms" Jan 13 21:29:31.612527 kubelet[2800]: I0113 21:29:31.612366 2800 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-253" Jan 13 21:29:31.612913 kubelet[2800]: E0113 21:29:31.612801 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.253:6443/api/v1/nodes\": dial tcp 172.31.18.253:6443: connect: connection refused" node="ip-172-31-18-253" Jan 13 21:29:31.815163 kubelet[2800]: I0113 21:29:31.815057 2800 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-253" Jan 13 21:29:31.816124 kubelet[2800]: E0113 21:29:31.815406 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.253:6443/api/v1/nodes\": dial tcp 172.31.18.253:6443: connect: connection refused" node="ip-172-31-18-253" Jan 13 21:29:31.867177 containerd[1971]: time="2025-01-13T21:29:31.867129773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-253,Uid:a09f2db99f5dd4fd6425fd77074bdcf3,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:31.890805 containerd[1971]: time="2025-01-13T21:29:31.890759272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-253,Uid:61056649c26880237568717b66f67561,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:31.891116 containerd[1971]: time="2025-01-13T21:29:31.890759249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-253,Uid:43b12b4f87903baf65a80f03fdbef97f,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:32.008591 kubelet[2800]: E0113 
21:29:32.008542 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-253?timeout=10s\": dial tcp 172.31.18.253:6443: connect: connection refused" interval="800ms" Jan 13 21:29:32.217302 kubelet[2800]: I0113 21:29:32.217271 2800 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-253" Jan 13 21:29:32.217654 kubelet[2800]: E0113 21:29:32.217609 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.253:6443/api/v1/nodes\": dial tcp 172.31.18.253:6443: connect: connection refused" node="ip-172-31-18-253" Jan 13 21:29:32.372476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198281695.mount: Deactivated successfully. Jan 13 21:29:32.387948 containerd[1971]: time="2025-01-13T21:29:32.387893963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:32.390043 containerd[1971]: time="2025-01-13T21:29:32.389989248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:29:32.391931 containerd[1971]: time="2025-01-13T21:29:32.391890360Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:32.393948 containerd[1971]: time="2025-01-13T21:29:32.393909912Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:32.395760 containerd[1971]: time="2025-01-13T21:29:32.395704008Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:29:32.398214 containerd[1971]: time="2025-01-13T21:29:32.398174462Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:32.399803 containerd[1971]: time="2025-01-13T21:29:32.399486028Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:29:32.403439 containerd[1971]: time="2025-01-13T21:29:32.403404437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:32.404617 containerd[1971]: time="2025-01-13T21:29:32.404451724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.572991ms" Jan 13 21:29:32.410228 containerd[1971]: time="2025-01-13T21:29:32.410094308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.061273ms" Jan 13 21:29:32.411500 containerd[1971]: time="2025-01-13T21:29:32.411456790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.234289ms" Jan 13 21:29:32.429930 kubelet[2800]: W0113 21:29:32.429859 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.253:6443: connect: connection refused Jan 13 21:29:32.430065 kubelet[2800]: E0113 21:29:32.429935 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:32.450666 kubelet[2800]: W0113 21:29:32.449280 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-253&limit=500&resourceVersion=0": dial tcp 172.31.18.253:6443: connect: connection refused Jan 13 21:29:32.450666 kubelet[2800]: E0113 21:29:32.449363 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-253&limit=500&resourceVersion=0\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:32.647072 kubelet[2800]: W0113 21:29:32.646947 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.253:6443: connect: connection refused Jan 13 21:29:32.648140 kubelet[2800]: E0113 21:29:32.648112 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:32.758887 containerd[1971]: time="2025-01-13T21:29:32.758724777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:32.759442 containerd[1971]: time="2025-01-13T21:29:32.759390230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:32.759633 containerd[1971]: time="2025-01-13T21:29:32.759604326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:32.760321 containerd[1971]: time="2025-01-13T21:29:32.760279681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:32.765565 containerd[1971]: time="2025-01-13T21:29:32.765436427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:32.766173 containerd[1971]: time="2025-01-13T21:29:32.765724752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:32.766173 containerd[1971]: time="2025-01-13T21:29:32.765761336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:32.766173 containerd[1971]: time="2025-01-13T21:29:32.765913700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:32.786809 containerd[1971]: time="2025-01-13T21:29:32.785616247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:32.786809 containerd[1971]: time="2025-01-13T21:29:32.785718217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:32.786809 containerd[1971]: time="2025-01-13T21:29:32.785813790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:32.786809 containerd[1971]: time="2025-01-13T21:29:32.785923592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:32.811512 kubelet[2800]: E0113 21:29:32.811461 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-253?timeout=10s\": dial tcp 172.31.18.253:6443: connect: connection refused" interval="1.6s" Jan 13 21:29:32.818961 systemd[1]: Started cri-containerd-b82e7c1eb137fcfad0d6c3bdf78322c280bf5e94acf128b3ef89f45870bda43a.scope - libcontainer container b82e7c1eb137fcfad0d6c3bdf78322c280bf5e94acf128b3ef89f45870bda43a. Jan 13 21:29:32.830910 systemd[1]: Started cri-containerd-1742c86dde411b3bc4b3fcbdbcb67dda1c9fabf9a0d0e51a8bacd1ef8eea9ec7.scope - libcontainer container 1742c86dde411b3bc4b3fcbdbcb67dda1c9fabf9a0d0e51a8bacd1ef8eea9ec7. Jan 13 21:29:32.833663 systemd[1]: Started cri-containerd-6eac96301a3efc089a647ae438ae35e337a04de64c7e7a64f2f39eb6db66bc1f.scope - libcontainer container 6eac96301a3efc089a647ae438ae35e337a04de64c7e7a64f2f39eb6db66bc1f. 
Jan 13 21:29:32.924865 containerd[1971]: time="2025-01-13T21:29:32.923553866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-253,Uid:a09f2db99f5dd4fd6425fd77074bdcf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eac96301a3efc089a647ae438ae35e337a04de64c7e7a64f2f39eb6db66bc1f\"" Jan 13 21:29:32.933002 containerd[1971]: time="2025-01-13T21:29:32.932775042Z" level=info msg="CreateContainer within sandbox \"6eac96301a3efc089a647ae438ae35e337a04de64c7e7a64f2f39eb6db66bc1f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:29:32.974105 containerd[1971]: time="2025-01-13T21:29:32.973832462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-253,Uid:43b12b4f87903baf65a80f03fdbef97f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1742c86dde411b3bc4b3fcbdbcb67dda1c9fabf9a0d0e51a8bacd1ef8eea9ec7\"" Jan 13 21:29:32.990537 containerd[1971]: time="2025-01-13T21:29:32.990275837Z" level=info msg="CreateContainer within sandbox \"1742c86dde411b3bc4b3fcbdbcb67dda1c9fabf9a0d0e51a8bacd1ef8eea9ec7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:29:32.994962 kubelet[2800]: W0113 21:29:32.994858 2800 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.253:6443: connect: connection refused Jan 13 21:29:32.995321 kubelet[2800]: E0113 21:29:32.995147 2800 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:32.998300 containerd[1971]: time="2025-01-13T21:29:32.998212495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-253,Uid:61056649c26880237568717b66f67561,Namespace:kube-system,Attempt:0,} returns sandbox id \"b82e7c1eb137fcfad0d6c3bdf78322c280bf5e94acf128b3ef89f45870bda43a\"" Jan 13 21:29:33.010541 containerd[1971]: time="2025-01-13T21:29:33.010450568Z" level=info msg="CreateContainer within sandbox \"b82e7c1eb137fcfad0d6c3bdf78322c280bf5e94acf128b3ef89f45870bda43a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:29:33.018453 containerd[1971]: time="2025-01-13T21:29:33.017950976Z" level=info msg="CreateContainer within sandbox \"6eac96301a3efc089a647ae438ae35e337a04de64c7e7a64f2f39eb6db66bc1f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"800a2bde5650c93d913ebacc532c446fcd86a8e5335056419243b56d14640b7e\"" Jan 13 21:29:33.020692 containerd[1971]: time="2025-01-13T21:29:33.020392263Z" level=info msg="StartContainer for \"800a2bde5650c93d913ebacc532c446fcd86a8e5335056419243b56d14640b7e\"" Jan 13 21:29:33.021924 kubelet[2800]: I0113 21:29:33.021898 2800 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-253" Jan 13 21:29:33.022809 kubelet[2800]: E0113 21:29:33.022776 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.253:6443/api/v1/nodes\": dial tcp 172.31.18.253:6443: connect: connection refused" node="ip-172-31-18-253" Jan 13 21:29:33.041061 containerd[1971]: time="2025-01-13T21:29:33.040982881Z" level=info msg="CreateContainer within 
sandbox \"1742c86dde411b3bc4b3fcbdbcb67dda1c9fabf9a0d0e51a8bacd1ef8eea9ec7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456\"" Jan 13 21:29:33.042696 containerd[1971]: time="2025-01-13T21:29:33.042634789Z" level=info msg="StartContainer for \"1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456\"" Jan 13 21:29:33.073215 containerd[1971]: time="2025-01-13T21:29:33.073134750Z" level=info msg="CreateContainer within sandbox \"b82e7c1eb137fcfad0d6c3bdf78322c280bf5e94acf128b3ef89f45870bda43a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7\"" Jan 13 21:29:33.074529 containerd[1971]: time="2025-01-13T21:29:33.074494856Z" level=info msg="StartContainer for \"d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7\"" Jan 13 21:29:33.105976 systemd[1]: Started cri-containerd-800a2bde5650c93d913ebacc532c446fcd86a8e5335056419243b56d14640b7e.scope - libcontainer container 800a2bde5650c93d913ebacc532c446fcd86a8e5335056419243b56d14640b7e. Jan 13 21:29:33.127967 systemd[1]: Started cri-containerd-1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456.scope - libcontainer container 1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456. Jan 13 21:29:33.209463 systemd[1]: Started cri-containerd-d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7.scope - libcontainer container d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7. Jan 13 21:29:33.313829 containerd[1971]: time="2025-01-13T21:29:33.313647639Z" level=info msg="StartContainer for \"800a2bde5650c93d913ebacc532c446fcd86a8e5335056419243b56d14640b7e\" returns successfully" Jan 13 21:29:33.314163 containerd[1971]: time="2025-01-13T21:29:33.313802631Z" level=info msg="StartContainer for \"1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456\" returns successfully" Jan 13 21:29:33.379769 containerd[1971]: time="2025-01-13T21:29:33.373927726Z" level=info msg="StartContainer for \"d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7\" returns successfully" Jan 13 21:29:33.503331 kubelet[2800]: E0113 21:29:33.502679 2800 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.253:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:29:34.413287 kubelet[2800]: E0113 21:29:34.413221 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-253?timeout=10s\": dial tcp 172.31.18.253:6443: connect: connection refused" interval="3.2s" Jan 13 21:29:34.627658 kubelet[2800]: I0113 21:29:34.627204 2800 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-253" Jan 13 21:29:34.627658 kubelet[2800]: E0113 21:29:34.627554 2800 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.253:6443/api/v1/nodes\": dial tcp 172.31.18.253:6443: connect: connection refused" node="ip-172-31-18-253" Jan 13 21:29:37.058892 kubelet[2800]: E0113 21:29:37.057405 2800 event.go:359] "Server rejected event (will not retry!)" err="namespaces 
\"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-253.181a5dd3fdbd9c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-253,UID:ip-172-31-18-253,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-253,},FirstTimestamp:2025-01-13 21:29:31.38129204 +0000 UTC m=+0.868212667,LastTimestamp:2025-01-13 21:29:31.38129204 +0000 UTC m=+0.868212667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-253,}" Jan 13 21:29:37.125553 kubelet[2800]: E0113 21:29:37.122094 2800 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-253.181a5dd402c87c55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-253,UID:ip-172-31-18-253,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-18-253 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-18-253,},FirstTimestamp:2025-01-13 21:29:31.465890901 +0000 UTC m=+0.952811509,LastTimestamp:2025-01-13 21:29:31.465890901 +0000 UTC m=+0.952811509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-253,}" Jan 13 21:29:37.184459 kubelet[2800]: E0113 21:29:37.183994 2800 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-253.181a5dd402c88c25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-253,UID:ip-172-31-18-253,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-18-253 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-18-253,},FirstTimestamp:2025-01-13 21:29:31.465894949 +0000 UTC m=+0.952815554,LastTimestamp:2025-01-13 21:29:31.465894949 +0000 UTC m=+0.952815554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-253,}" Jan 13 21:29:37.289522 kubelet[2800]: E0113 21:29:37.289394 2800 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-253" not found Jan 13 21:29:37.376407 kubelet[2800]: I0113 21:29:37.375175 2800 apiserver.go:52] "Watching apiserver" Jan 13 21:29:37.399164 kubelet[2800]: I0113 21:29:37.399099 2800 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:29:37.618507 kubelet[2800]: E0113 21:29:37.618472 2800 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-253\" not found" node="ip-172-31-18-253" Jan 13 21:29:37.664409 kubelet[2800]: E0113 21:29:37.664272 2800 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-253" not found Jan 13 21:29:37.835240 kubelet[2800]: I0113 21:29:37.834582 2800 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-253" Jan 13 21:29:37.849269 kubelet[2800]: 
I0113 21:29:37.849229 2800 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-253" Jan 13 21:29:37.897824 update_engine[1954]: I20250113 21:29:37.897735 1954 update_attempter.cc:509] Updating boot flags... Jan 13 21:29:37.998670 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3084) Jan 13 21:29:38.390668 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3087) Jan 13 21:29:39.511105 systemd[1]: Reloading requested from client PID 3253 ('systemctl') (unit session-9.scope)... Jan 13 21:29:39.511388 systemd[1]: Reloading... Jan 13 21:29:39.750871 zram_generator::config[3290]: No configuration found. Jan 13 21:29:39.953439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:29:40.085729 systemd[1]: Reloading finished in 573 ms. Jan 13 21:29:40.169913 kubelet[2800]: I0113 21:29:40.169774 2800 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:29:40.170340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:40.186199 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:29:40.186521 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:40.186691 systemd[1]: kubelet.service: Consumed 1.199s CPU time, 115.2M memory peak, 0B memory swap peak. Jan 13 21:29:40.194170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:40.505510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:40.521169 (kubelet)[3350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:29:40.631748 kubelet[3350]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:40.631748 kubelet[3350]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:29:40.631748 kubelet[3350]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:40.632446 kubelet[3350]: I0113 21:29:40.631947 3350 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:29:40.652106 kubelet[3350]: I0113 21:29:40.652064 3350 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:29:40.652106 kubelet[3350]: I0113 21:29:40.652105 3350 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:29:40.653128 kubelet[3350]: I0113 21:29:40.652667 3350 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:29:40.656471 kubelet[3350]: I0113 21:29:40.656096 3350 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 13 21:29:40.678495 kubelet[3350]: I0113 21:29:40.678422 3350 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:29:40.684545 kubelet[3350]: E0113 21:29:40.683873 3350 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:29:40.684545 kubelet[3350]: I0113 21:29:40.683911 3350 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:29:40.688122 kubelet[3350]: I0113 21:29:40.688092 3350 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:29:40.688372 kubelet[3350]: I0113 21:29:40.688271 3350 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:29:40.688561 kubelet[3350]: I0113 21:29:40.688523 3350 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:29:40.688832 kubelet[3350]: I0113 21:29:40.688558 3350 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-253","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:29:40.689054 kubelet[3350]: I0113 21:29:40.688839 3350 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:29:40.689054 kubelet[3350]: I0113 21:29:40.688855 3350 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:29:40.690389 kubelet[3350]: I0113 21:29:40.690358 3350 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:40.692709 kubelet[3350]: I0113 21:29:40.690798 3350 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:29:40.692709 kubelet[3350]: I0113 21:29:40.690822 3350 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:29:40.692709 kubelet[3350]: I0113 
21:29:40.690859 3350 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:29:40.692709 kubelet[3350]: I0113 21:29:40.690874 3350 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:29:40.692709 kubelet[3350]: I0113 21:29:40.692368 3350 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:29:40.693152 kubelet[3350]: I0113 21:29:40.693131 3350 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:29:40.693656 kubelet[3350]: I0113 21:29:40.693626 3350 server.go:1269] "Started kubelet" Jan 13 21:29:40.700119 kubelet[3350]: I0113 21:29:40.699862 3350 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:29:40.706247 kubelet[3350]: I0113 21:29:40.706198 3350 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:29:40.709879 kubelet[3350]: I0113 21:29:40.708246 3350 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:29:40.736569 kubelet[3350]: I0113 21:29:40.709916 3350 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:29:40.736997 kubelet[3350]: I0113 21:29:40.736976 3350 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:29:40.737201 kubelet[3350]: I0113 21:29:40.714038 3350 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:29:40.749879 kubelet[3350]: I0113 21:29:40.710571 3350 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:29:40.755665 kubelet[3350]: E0113 21:29:40.715543 3350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-253\" not found" Jan 13 21:29:40.763761 kubelet[3350]: I0113 21:29:40.714060 3350 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:29:40.763761 kubelet[3350]: I0113 21:29:40.750353 3350 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:29:40.770666 kubelet[3350]: E0113 21:29:40.769410 3350 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:29:40.770841 kubelet[3350]: I0113 21:29:40.769741 3350 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:29:40.770923 kubelet[3350]: I0113 21:29:40.770913 3350 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:29:40.773006 kubelet[3350]: I0113 21:29:40.771057 3350 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:29:40.780628 kubelet[3350]: I0113 21:29:40.775844 3350 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:29:40.800791 kubelet[3350]: I0113 21:29:40.798350 3350 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:29:40.801003 kubelet[3350]: I0113 21:29:40.800985 3350 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:29:40.801088 kubelet[3350]: I0113 21:29:40.801078 3350 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:29:40.801248 kubelet[3350]: E0113 21:29:40.801196 3350 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:29:40.889795 kubelet[3350]: I0113 21:29:40.889023 3350 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:29:40.889795 kubelet[3350]: I0113 21:29:40.889050 3350 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:29:40.889795 kubelet[3350]: I0113 21:29:40.889075 3350 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:40.889795 kubelet[3350]: I0113 21:29:40.889376 3350 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:29:40.889795 kubelet[3350]: I0113 21:29:40.889391 3350 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:29:40.889795 kubelet[3350]: I0113 21:29:40.889418 3350 policy_none.go:49] "None policy: Start" Jan 13 21:29:40.897287 kubelet[3350]: I0113 21:29:40.897251 3350 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:29:40.897287 kubelet[3350]: I0113 21:29:40.897290 3350 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:29:40.904711 kubelet[3350]: I0113 21:29:40.902804 3350 state_mem.go:75] "Updated machine memory state" Jan 13 21:29:40.904711 kubelet[3350]: E0113 21:29:40.903573 3350 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:29:40.916154 kubelet[3350]: I0113 21:29:40.916120 3350 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:29:40.916373 kubelet[3350]: I0113 21:29:40.916321 3350 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:29:40.916435 kubelet[3350]: I0113 21:29:40.916362 3350 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:29:40.917493 kubelet[3350]: I0113 21:29:40.917424 3350 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:29:41.038449 kubelet[3350]: I0113 21:29:41.038359 3350 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-253" Jan 13 21:29:41.050201 kubelet[3350]: I0113 21:29:41.049894 3350 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-18-253" Jan 13 21:29:41.050201 kubelet[3350]: I0113 21:29:41.049997 3350 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-253" Jan 13 21:29:41.115202 kubelet[3350]: E0113 21:29:41.115161 3350 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-18-253\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:41.164478 kubelet[3350]: I0113 21:29:41.163774 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a09f2db99f5dd4fd6425fd77074bdcf3-ca-certs\") pod \"kube-apiserver-ip-172-31-18-253\" (UID: \"a09f2db99f5dd4fd6425fd77074bdcf3\") " pod="kube-system/kube-apiserver-ip-172-31-18-253" Jan 13 21:29:41.164478 kubelet[3350]: I0113 21:29:41.163856 3350 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a09f2db99f5dd4fd6425fd77074bdcf3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-253\" (UID: \"a09f2db99f5dd4fd6425fd77074bdcf3\") " pod="kube-system/kube-apiserver-ip-172-31-18-253" Jan 13 21:29:41.164478 kubelet[3350]: I0113 21:29:41.164027 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:41.164478 kubelet[3350]: I0113 21:29:41.164068 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:41.164478 kubelet[3350]: I0113 21:29:41.164096 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:41.164787 kubelet[3350]: I0113 21:29:41.164118 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a09f2db99f5dd4fd6425fd77074bdcf3-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-253\" (UID: \"a09f2db99f5dd4fd6425fd77074bdcf3\") " pod="kube-system/kube-apiserver-ip-172-31-18-253" Jan 13 21:29:41.164787 kubelet[3350]: I0113 21:29:41.164139 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:41.164787 kubelet[3350]: I0113 21:29:41.164162 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61056649c26880237568717b66f67561-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-253\" (UID: \"61056649c26880237568717b66f67561\") " pod="kube-system/kube-controller-manager-ip-172-31-18-253" Jan 13 21:29:41.164787 kubelet[3350]: I0113 21:29:41.164198 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43b12b4f87903baf65a80f03fdbef97f-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-253\" (UID: \"43b12b4f87903baf65a80f03fdbef97f\") " pod="kube-system/kube-scheduler-ip-172-31-18-253" Jan 13 21:29:41.703704 kubelet[3350]: I0113 21:29:41.701756 3350 apiserver.go:52] "Watching apiserver" Jan 13 21:29:41.762057 kubelet[3350]: I0113 21:29:41.761943 3350 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:29:41.866511 kubelet[3350]: I0113 
21:29:41.866322 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-253" podStartSLOduration=0.866305276 podStartE2EDuration="866.305276ms" podCreationTimestamp="2025-01-13 21:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:29:41.866016806 +0000 UTC m=+1.335694278" watchObservedRunningTime="2025-01-13 21:29:41.866305276 +0000 UTC m=+1.335982739" Jan 13 21:29:41.894446 kubelet[3350]: I0113 21:29:41.893991 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-253" podStartSLOduration=2.893930228 podStartE2EDuration="2.893930228s" podCreationTimestamp="2025-01-13 21:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:29:41.893524008 +0000 UTC m=+1.363201497" watchObservedRunningTime="2025-01-13 21:29:41.893930228 +0000 UTC m=+1.363607699" Jan 13 21:29:41.894446 kubelet[3350]: I0113 21:29:41.894072 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-253" podStartSLOduration=0.894066238 podStartE2EDuration="894.066238ms" podCreationTimestamp="2025-01-13 21:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:29:41.879325488 +0000 UTC m=+1.349002959" watchObservedRunningTime="2025-01-13 21:29:41.894066238 +0000 UTC m=+1.363743713" Jan 13 21:29:44.031084 kubelet[3350]: I0113 21:29:44.031039 3350 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:29:44.031944 containerd[1971]: time="2025-01-13T21:29:44.031904390Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:29:44.032387 kubelet[3350]: I0113 21:29:44.032204 3350 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:29:45.115487 systemd[1]: Created slice kubepods-besteffort-pod2ed656ea_2938_4ba4_b238_3009091a508b.slice - libcontainer container kubepods-besteffort-pod2ed656ea_2938_4ba4_b238_3009091a508b.slice. Jan 13 21:29:45.267701 systemd[1]: Created slice kubepods-besteffort-podc097ac6f_3712_4308_809e_4c5cf2bcc52b.slice - libcontainer container kubepods-besteffort-podc097ac6f_3712_4308_809e_4c5cf2bcc52b.slice. 
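The pod_startup_latency_tracker figures above are internally consistent: for these static pods nothing was pulled (firstStartedPulling and lastFinishedPulling are Go's zero time, 0001-01-01 00:00:00), so podStartE2EDuration is simply watchObservedRunningTime minus podCreationTimestamp, and the m=+... suffixes are monotonic-clock offsets since kubelet start. A minimal sketch re-deriving the kube-apiserver figure (Python's datetime keeps only microseconds, so the nanosecond digits are truncated):

```python
# Re-derive podStartE2EDuration for kube-apiserver-ip-172-31-18-253 from the
# two timestamps logged above (nanoseconds truncated to microseconds).
from datetime import datetime, timezone

created  = datetime(2025, 1, 13, 21, 29, 41, 0,      tzinfo=timezone.utc)  # podCreationTimestamp
observed = datetime(2025, 1, 13, 21, 29, 41, 866305, tzinfo=timezone.utc)  # watchObservedRunningTime

print((observed - created).total_seconds() * 1000)  # ~866.305 ms == "866.305276ms"
```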
Jan 13 21:29:45.291729 kubelet[3350]: I0113 21:29:45.291681 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ed656ea-2938-4ba4-b238-3009091a508b-xtables-lock\") pod \"kube-proxy-xc4gx\" (UID: \"2ed656ea-2938-4ba4-b238-3009091a508b\") " pod="kube-system/kube-proxy-xc4gx" Jan 13 21:29:45.291729 kubelet[3350]: I0113 21:29:45.291730 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ed656ea-2938-4ba4-b238-3009091a508b-kube-proxy\") pod \"kube-proxy-xc4gx\" (UID: \"2ed656ea-2938-4ba4-b238-3009091a508b\") " pod="kube-system/kube-proxy-xc4gx" Jan 13 21:29:45.293051 kubelet[3350]: I0113 21:29:45.291754 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ed656ea-2938-4ba4-b238-3009091a508b-lib-modules\") pod \"kube-proxy-xc4gx\" (UID: \"2ed656ea-2938-4ba4-b238-3009091a508b\") " pod="kube-system/kube-proxy-xc4gx" Jan 13 21:29:45.293051 kubelet[3350]: I0113 21:29:45.291791 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8djg7\" (UniqueName: \"kubernetes.io/projected/2ed656ea-2938-4ba4-b238-3009091a508b-kube-api-access-8djg7\") pod \"kube-proxy-xc4gx\" (UID: \"2ed656ea-2938-4ba4-b238-3009091a508b\") " pod="kube-system/kube-proxy-xc4gx" Jan 13 21:29:45.392839 kubelet[3350]: I0113 21:29:45.392339 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6cs9\" (UniqueName: \"kubernetes.io/projected/c097ac6f-3712-4308-809e-4c5cf2bcc52b-kube-api-access-q6cs9\") pod \"tigera-operator-76c4976dd7-ztt4j\" (UID: \"c097ac6f-3712-4308-809e-4c5cf2bcc52b\") " pod="tigera-operator/tigera-operator-76c4976dd7-ztt4j" Jan 13 21:29:45.392839 kubelet[3350]: I0113 21:29:45.392451 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c097ac6f-3712-4308-809e-4c5cf2bcc52b-var-lib-calico\") pod \"tigera-operator-76c4976dd7-ztt4j\" (UID: \"c097ac6f-3712-4308-809e-4c5cf2bcc52b\") " pod="tigera-operator/tigera-operator-76c4976dd7-ztt4j" Jan 13 21:29:45.437706 containerd[1971]: time="2025-01-13T21:29:45.433958814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xc4gx,Uid:2ed656ea-2938-4ba4-b238-3009091a508b,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:45.533480 containerd[1971]: time="2025-01-13T21:29:45.533343463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:45.534428 containerd[1971]: time="2025-01-13T21:29:45.533433064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:45.534428 containerd[1971]: time="2025-01-13T21:29:45.534332375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:45.536439 containerd[1971]: time="2025-01-13T21:29:45.535657688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:45.567498 systemd[1]: Started cri-containerd-8021a163f26f77e9139aa900c74f8f174937d690003b3bf4279899bff5de8380.scope - libcontainer container 8021a163f26f77e9139aa900c74f8f174937d690003b3bf4279899bff5de8380. Jan 13 21:29:45.582104 containerd[1971]: time="2025-01-13T21:29:45.582054474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-ztt4j,Uid:c097ac6f-3712-4308-809e-4c5cf2bcc52b,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:29:45.636436 containerd[1971]: time="2025-01-13T21:29:45.636376147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xc4gx,Uid:2ed656ea-2938-4ba4-b238-3009091a508b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8021a163f26f77e9139aa900c74f8f174937d690003b3bf4279899bff5de8380\"" Jan 13 21:29:45.646209 containerd[1971]: time="2025-01-13T21:29:45.646083153Z" level=info msg="CreateContainer within sandbox \"8021a163f26f77e9139aa900c74f8f174937d690003b3bf4279899bff5de8380\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:29:45.666504 containerd[1971]: time="2025-01-13T21:29:45.666230015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:45.666504 containerd[1971]: time="2025-01-13T21:29:45.666399580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:45.668472 containerd[1971]: time="2025-01-13T21:29:45.666504854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:45.668472 containerd[1971]: time="2025-01-13T21:29:45.666673420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:45.722097 containerd[1971]: time="2025-01-13T21:29:45.722047974Z" level=info msg="CreateContainer within sandbox \"8021a163f26f77e9139aa900c74f8f174937d690003b3bf4279899bff5de8380\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"edfdf7061775b9e92abc2b2903c3ed57e858e265905d4a06de0e1e684f68a5f1\"" Jan 13 21:29:45.723885 systemd[1]: Started cri-containerd-589544544c9a168f6fad07df9210aef7c3c4739ba95aff13e81aaeadeb52bda9.scope - libcontainer container 589544544c9a168f6fad07df9210aef7c3c4739ba95aff13e81aaeadeb52bda9. Jan 13 21:29:45.727680 containerd[1971]: time="2025-01-13T21:29:45.727621196Z" level=info msg="StartContainer for \"edfdf7061775b9e92abc2b2903c3ed57e858e265905d4a06de0e1e684f68a5f1\"" Jan 13 21:29:45.821979 systemd[1]: Started cri-containerd-edfdf7061775b9e92abc2b2903c3ed57e858e265905d4a06de0e1e684f68a5f1.scope - libcontainer container edfdf7061775b9e92abc2b2903c3ed57e858e265905d4a06de0e1e684f68a5f1. 
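Each sandbox started above appears twice in the containerd entries: once when the RunPodSandbox request arrives and once when it returns the 64-hex sandbox id that systemd then reuses in the cri-containerd-<id>.scope unit name. A rough helper (illustrative only, assuming journal text with logfmt-escaped quotes exactly as shown in this log) for tying those scope names back to pods:

```python
# Map pod sandbox names to the sandbox ids containerd returns, by matching
# the 'returns sandbox id' lines (inner quotes appear as \" in the journal).
import re

RETURNS = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),[^}]*\} '
    r'returns sandbox id \\"([0-9a-f]{64})\\"'
)

def sandbox_ids(journal_text: str) -> dict[str, str]:
    """Return {pod_sandbox_name: sandbox_id} parsed from containerd log text."""
    return dict(RETURNS.findall(journal_text))

# e.g. {'kube-proxy-xc4gx': '8021a163f26f...', 'tigera-operator-76c4976dd7-ztt4j': '589544544c9a...'}
```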
Jan 13 21:29:45.872239 containerd[1971]: time="2025-01-13T21:29:45.872023159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-ztt4j,Uid:c097ac6f-3712-4308-809e-4c5cf2bcc52b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"589544544c9a168f6fad07df9210aef7c3c4739ba95aff13e81aaeadeb52bda9\"" Jan 13 21:29:45.882811 containerd[1971]: time="2025-01-13T21:29:45.881666098Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:29:45.975626 containerd[1971]: time="2025-01-13T21:29:45.975514977Z" level=info msg="StartContainer for \"edfdf7061775b9e92abc2b2903c3ed57e858e265905d4a06de0e1e684f68a5f1\" returns successfully" Jan 13 21:29:46.424088 sudo[2309]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:46.450134 sshd[2306]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:46.457153 systemd-logind[1953]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:29:46.459261 systemd[1]: sshd@8-172.31.18.253:22-147.75.109.163:50954.service: Deactivated successfully. Jan 13 21:29:46.463777 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:29:46.464005 systemd[1]: session-9.scope: Consumed 4.913s CPU time, 142.1M memory peak, 0B memory swap peak. Jan 13 21:29:46.465080 systemd-logind[1953]: Removed session 9. Jan 13 21:29:46.906845 kubelet[3350]: I0113 21:29:46.906781 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xc4gx" podStartSLOduration=1.906758833 podStartE2EDuration="1.906758833s" podCreationTimestamp="2025-01-13 21:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:29:46.906308003 +0000 UTC m=+6.375985476" watchObservedRunningTime="2025-01-13 21:29:46.906758833 +0000 UTC m=+6.376436305" Jan 13 21:29:47.325548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954162412.mount: Deactivated successfully. 
Jan 13 21:29:48.415557 containerd[1971]: time="2025-01-13T21:29:48.415505400Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:48.417344 containerd[1971]: time="2025-01-13T21:29:48.417177113Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764293" Jan 13 21:29:48.419832 containerd[1971]: time="2025-01-13T21:29:48.418821851Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:48.421391 containerd[1971]: time="2025-01-13T21:29:48.421355644Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:48.422167 containerd[1971]: time="2025-01-13T21:29:48.422116686Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.540404552s" Jan 13 21:29:48.422308 containerd[1971]: time="2025-01-13T21:29:48.422163771Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 21:29:48.453489 containerd[1971]: time="2025-01-13T21:29:48.453447573Z" level=info msg="CreateContainer within sandbox \"589544544c9a168f6fad07df9210aef7c3c4739ba95aff13e81aaeadeb52bda9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:29:48.471184 containerd[1971]: time="2025-01-13T21:29:48.471130999Z" level=info msg="CreateContainer within sandbox \"589544544c9a168f6fad07df9210aef7c3c4739ba95aff13e81aaeadeb52bda9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7\"" Jan 13 21:29:48.471692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3462777918.mount: Deactivated successfully. Jan 13 21:29:48.474391 containerd[1971]: time="2025-01-13T21:29:48.474307578Z" level=info msg="StartContainer for \"44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7\"" Jan 13 21:29:48.518957 systemd[1]: Started cri-containerd-44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7.scope - libcontainer container 44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7. 
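The pull time containerd reports for the operator image can be cross-checked against its own log timestamps: PullImage was logged at 21:29:45.881666098Z and the Pulled message at 21:29:48.422116686Z, a delta a few dozen microseconds larger than the 2.540404552s it measured around the pull itself. A quick check at microsecond precision:

```python
# Delta between the PullImage and Pulled log timestamps above; containerd's
# internally measured 2.540404552s is ~46 µs shorter (log-emission overhead).
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%f"
start = datetime.strptime("2025-01-13T21:29:45.881666", FMT)
done  = datetime.strptime("2025-01-13T21:29:48.422116", FMT)
print((done - start).total_seconds())  # ~2.540450 s
```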
Jan 13 21:29:48.574667 containerd[1971]: time="2025-01-13T21:29:48.574510387Z" level=info msg="StartContainer for \"44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7\" returns successfully" Jan 13 21:29:51.905975 kubelet[3350]: I0113 21:29:51.905892 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-ztt4j" podStartSLOduration=4.341718999 podStartE2EDuration="6.905870022s" podCreationTimestamp="2025-01-13 21:29:45 +0000 UTC" firstStartedPulling="2025-01-13 21:29:45.881118776 +0000 UTC m=+5.350796239" lastFinishedPulling="2025-01-13 21:29:48.4452698 +0000 UTC m=+7.914947262" observedRunningTime="2025-01-13 21:29:48.898306446 +0000 UTC m=+8.367983915" watchObservedRunningTime="2025-01-13 21:29:51.905870022 +0000 UTC m=+11.375547494" Jan 13 21:29:51.923762 systemd[1]: Created slice kubepods-besteffort-podcac0d9f0_2f5d_4f9a_9fb2_44ea3d420d65.slice - libcontainer container kubepods-besteffort-podcac0d9f0_2f5d_4f9a_9fb2_44ea3d420d65.slice. Jan 13 21:29:51.943282 kubelet[3350]: I0113 21:29:51.942956 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r74z\" (UniqueName: \"kubernetes.io/projected/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-kube-api-access-6r74z\") pod \"calico-typha-77b86fb5df-f9x6m\" (UID: \"cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65\") " pod="calico-system/calico-typha-77b86fb5df-f9x6m" Jan 13 21:29:51.943691 kubelet[3350]: I0113 21:29:51.943368 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-typha-certs\") pod \"calico-typha-77b86fb5df-f9x6m\" (UID: \"cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65\") " pod="calico-system/calico-typha-77b86fb5df-f9x6m" Jan 13 21:29:51.943788 kubelet[3350]: I0113 21:29:51.943729 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-tigera-ca-bundle\") pod \"calico-typha-77b86fb5df-f9x6m\" (UID: \"cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65\") " pod="calico-system/calico-typha-77b86fb5df-f9x6m" Jan 13 21:29:52.145337 kubelet[3350]: I0113 21:29:52.145208 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-xtables-lock\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145337 kubelet[3350]: I0113 21:29:52.145257 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-var-lib-calico\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145337 kubelet[3350]: I0113 21:29:52.145289 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58gr8\" (UniqueName: \"kubernetes.io/projected/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-kube-api-access-58gr8\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145337 kubelet[3350]: I0113 21:29:52.145315 3350 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-node-certs\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145337 kubelet[3350]: I0113 21:29:52.145336 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-policysync\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145677 kubelet[3350]: I0113 21:29:52.145356 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-tigera-ca-bundle\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145677 kubelet[3350]: I0113 21:29:52.145397 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-net-dir\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145677 kubelet[3350]: I0113 21:29:52.145420 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-lib-modules\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145677 kubelet[3350]: I0113 21:29:52.145442 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-log-dir\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145677 kubelet[3350]: I0113 21:29:52.145477 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-flexvol-driver-host\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145900 kubelet[3350]: I0113 21:29:52.145505 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-var-run-calico\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.145900 kubelet[3350]: I0113 21:29:52.145533 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-bin-dir\") pod \"calico-node-jz2nr\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " pod="calico-system/calico-node-jz2nr" Jan 13 21:29:52.154034 systemd[1]: Created slice kubepods-besteffort-pod15779d9e_e5e1_4ea9_8a63_9efe7092cdc5.slice - libcontainer container 
kubepods-besteffort-pod15779d9e_e5e1_4ea9_8a63_9efe7092cdc5.slice. Jan 13 21:29:52.237262 containerd[1971]: time="2025-01-13T21:29:52.237218158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b86fb5df-f9x6m,Uid:cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65,Namespace:calico-system,Attempt:0,}" Jan 13 21:29:52.267889 kubelet[3350]: E0113 21:29:52.263280 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.267889 kubelet[3350]: W0113 21:29:52.266038 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.267889 kubelet[3350]: E0113 21:29:52.266086 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.279086 kubelet[3350]: E0113 21:29:52.270781 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.279086 kubelet[3350]: W0113 21:29:52.270806 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.279086 kubelet[3350]: E0113 21:29:52.270836 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.279086 kubelet[3350]: E0113 21:29:52.272748 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.279086 kubelet[3350]: W0113 21:29:52.272783 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.279086 kubelet[3350]: E0113 21:29:52.272807 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.279086 kubelet[3350]: E0113 21:29:52.278880 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.279459 kubelet[3350]: W0113 21:29:52.278904 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.284676 kubelet[3350]: E0113 21:29:52.280691 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.284676 kubelet[3350]: E0113 21:29:52.284485 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.285189 kubelet[3350]: W0113 21:29:52.284515 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.285390 kubelet[3350]: E0113 21:29:52.285369 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.300268 kubelet[3350]: E0113 21:29:52.299941 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.300268 kubelet[3350]: W0113 21:29:52.299975 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.300268 kubelet[3350]: E0113 21:29:52.300103 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.330606 containerd[1971]: time="2025-01-13T21:29:52.330115075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:52.330606 containerd[1971]: time="2025-01-13T21:29:52.330203784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:52.330606 containerd[1971]: time="2025-01-13T21:29:52.330227683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:52.330606 containerd[1971]: time="2025-01-13T21:29:52.330448127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:52.399376 systemd[1]: Started cri-containerd-4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5.scope - libcontainer container 4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5. Jan 13 21:29:52.462172 containerd[1971]: time="2025-01-13T21:29:52.462044699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jz2nr,Uid:15779d9e-e5e1-4ea9-8a63-9efe7092cdc5,Namespace:calico-system,Attempt:0,}" Jan 13 21:29:52.497808 kubelet[3350]: E0113 21:29:52.494461 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:29:52.525121 containerd[1971]: time="2025-01-13T21:29:52.524389039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:52.525121 containerd[1971]: time="2025-01-13T21:29:52.524469709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:52.525121 containerd[1971]: time="2025-01-13T21:29:52.524505484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:52.525121 containerd[1971]: time="2025-01-13T21:29:52.524661729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:52.550856 kubelet[3350]: E0113 21:29:52.550564 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.550856 kubelet[3350]: W0113 21:29:52.550605 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.550856 kubelet[3350]: E0113 21:29:52.550671 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.552247 kubelet[3350]: E0113 21:29:52.552211 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.552247 kubelet[3350]: W0113 21:29:52.552237 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.552413 kubelet[3350]: E0113 21:29:52.552262 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.553476 kubelet[3350]: E0113 21:29:52.552894 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.553476 kubelet[3350]: W0113 21:29:52.552928 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.553942 kubelet[3350]: E0113 21:29:52.552942 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.556447 kubelet[3350]: E0113 21:29:52.556324 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.556447 kubelet[3350]: W0113 21:29:52.556349 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.556447 kubelet[3350]: E0113 21:29:52.556370 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.560496 kubelet[3350]: E0113 21:29:52.556921 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.560496 kubelet[3350]: W0113 21:29:52.556950 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.560496 kubelet[3350]: E0113 21:29:52.556968 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.560496 kubelet[3350]: E0113 21:29:52.558000 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.560496 kubelet[3350]: W0113 21:29:52.558016 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.560496 kubelet[3350]: E0113 21:29:52.558698 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.560860 kubelet[3350]: E0113 21:29:52.560618 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.560860 kubelet[3350]: W0113 21:29:52.560632 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.560860 kubelet[3350]: E0113 21:29:52.560684 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.561934 kubelet[3350]: E0113 21:29:52.561280 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.561934 kubelet[3350]: W0113 21:29:52.561297 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.561934 kubelet[3350]: E0113 21:29:52.561315 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.564311 kubelet[3350]: E0113 21:29:52.564160 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.564311 kubelet[3350]: W0113 21:29:52.564177 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.564311 kubelet[3350]: E0113 21:29:52.564202 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.566025 systemd[1]: Started cri-containerd-e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f.scope - libcontainer container e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f. Jan 13 21:29:52.568564 kubelet[3350]: E0113 21:29:52.566716 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.568564 kubelet[3350]: W0113 21:29:52.566732 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.568564 kubelet[3350]: E0113 21:29:52.566756 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.568564 kubelet[3350]: E0113 21:29:52.568041 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.568564 kubelet[3350]: W0113 21:29:52.568056 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.568564 kubelet[3350]: E0113 21:29:52.568075 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.568854 kubelet[3350]: E0113 21:29:52.568829 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.568854 kubelet[3350]: W0113 21:29:52.568842 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.568946 kubelet[3350]: E0113 21:29:52.568858 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.569351 kubelet[3350]: E0113 21:29:52.569319 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.569351 kubelet[3350]: W0113 21:29:52.569337 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.569537 kubelet[3350]: E0113 21:29:52.569351 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.570087 kubelet[3350]: E0113 21:29:52.570041 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.570087 kubelet[3350]: W0113 21:29:52.570059 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.570087 kubelet[3350]: E0113 21:29:52.570074 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.570440 kubelet[3350]: E0113 21:29:52.570422 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.570440 kubelet[3350]: W0113 21:29:52.570439 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.570623 kubelet[3350]: E0113 21:29:52.570452 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.571167 kubelet[3350]: E0113 21:29:52.570737 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.571167 kubelet[3350]: W0113 21:29:52.570747 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.571167 kubelet[3350]: E0113 21:29:52.570760 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.571167 kubelet[3350]: E0113 21:29:52.571093 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.571167 kubelet[3350]: W0113 21:29:52.571114 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.571167 kubelet[3350]: E0113 21:29:52.571128 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.572577 kubelet[3350]: E0113 21:29:52.571434 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.572577 kubelet[3350]: W0113 21:29:52.571444 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.572577 kubelet[3350]: E0113 21:29:52.571457 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.572577 kubelet[3350]: E0113 21:29:52.571978 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.572577 kubelet[3350]: W0113 21:29:52.571990 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.572577 kubelet[3350]: E0113 21:29:52.572004 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.573195 kubelet[3350]: E0113 21:29:52.572616 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.573195 kubelet[3350]: W0113 21:29:52.572629 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.573195 kubelet[3350]: E0113 21:29:52.572713 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.657212 kubelet[3350]: E0113 21:29:52.657177 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.657212 kubelet[3350]: W0113 21:29:52.657207 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.663657 kubelet[3350]: E0113 21:29:52.657233 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.664715 kubelet[3350]: I0113 21:29:52.663709 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql4xk\" (UniqueName: \"kubernetes.io/projected/a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4-kube-api-access-ql4xk\") pod \"csi-node-driver-v4xcp\" (UID: \"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4\") " pod="calico-system/csi-node-driver-v4xcp" Jan 13 21:29:52.664859 kubelet[3350]: E0113 21:29:52.664835 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.664859 kubelet[3350]: W0113 21:29:52.664853 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.664949 kubelet[3350]: E0113 21:29:52.664882 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.665555 kubelet[3350]: E0113 21:29:52.665391 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.665555 kubelet[3350]: W0113 21:29:52.665415 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.665555 kubelet[3350]: E0113 21:29:52.665433 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.666039 kubelet[3350]: E0113 21:29:52.665928 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.666039 kubelet[3350]: W0113 21:29:52.665947 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.666039 kubelet[3350]: E0113 21:29:52.665963 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.666039 kubelet[3350]: I0113 21:29:52.666002 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4-socket-dir\") pod \"csi-node-driver-v4xcp\" (UID: \"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4\") " pod="calico-system/csi-node-driver-v4xcp" Jan 13 21:29:52.667006 kubelet[3350]: E0113 21:29:52.666382 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.667006 kubelet[3350]: W0113 21:29:52.666396 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.667006 kubelet[3350]: E0113 21:29:52.666413 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.673779 kubelet[3350]: E0113 21:29:52.672307 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.673779 kubelet[3350]: W0113 21:29:52.672333 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.673779 kubelet[3350]: E0113 21:29:52.672388 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.677534 kubelet[3350]: E0113 21:29:52.676925 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.677534 kubelet[3350]: W0113 21:29:52.676952 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.677534 kubelet[3350]: E0113 21:29:52.677051 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.677534 kubelet[3350]: I0113 21:29:52.677101 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4-varrun\") pod \"csi-node-driver-v4xcp\" (UID: \"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4\") " pod="calico-system/csi-node-driver-v4xcp" Jan 13 21:29:52.679731 kubelet[3350]: E0113 21:29:52.678054 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.679731 kubelet[3350]: W0113 21:29:52.678077 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.679731 kubelet[3350]: E0113 21:29:52.678106 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.679731 kubelet[3350]: I0113 21:29:52.678136 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4-kubelet-dir\") pod \"csi-node-driver-v4xcp\" (UID: \"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4\") " pod="calico-system/csi-node-driver-v4xcp" Jan 13 21:29:52.680979 kubelet[3350]: E0113 21:29:52.680834 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.680979 kubelet[3350]: W0113 21:29:52.680857 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.680979 kubelet[3350]: E0113 21:29:52.680889 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.680979 kubelet[3350]: I0113 21:29:52.680934 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4-registration-dir\") pod \"csi-node-driver-v4xcp\" (UID: \"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4\") " pod="calico-system/csi-node-driver-v4xcp" Jan 13 21:29:52.682250 kubelet[3350]: E0113 21:29:52.682226 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.682250 kubelet[3350]: W0113 21:29:52.682250 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.682573 kubelet[3350]: E0113 21:29:52.682275 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.682795 kubelet[3350]: E0113 21:29:52.682749 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.682795 kubelet[3350]: W0113 21:29:52.682789 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.682969 kubelet[3350]: E0113 21:29:52.682881 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.684241 kubelet[3350]: E0113 21:29:52.684221 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.684241 kubelet[3350]: W0113 21:29:52.684240 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.684431 kubelet[3350]: E0113 21:29:52.684328 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.685097 kubelet[3350]: E0113 21:29:52.685080 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.685097 kubelet[3350]: W0113 21:29:52.685098 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.685546 kubelet[3350]: E0113 21:29:52.685118 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.686204 kubelet[3350]: E0113 21:29:52.686161 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.686204 kubelet[3350]: W0113 21:29:52.686176 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.686204 kubelet[3350]: E0113 21:29:52.686191 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.688327 kubelet[3350]: E0113 21:29:52.687466 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.688327 kubelet[3350]: W0113 21:29:52.687484 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.689192 kubelet[3350]: E0113 21:29:52.687500 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.701042 containerd[1971]: time="2025-01-13T21:29:52.700920792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jz2nr,Uid:15779d9e-e5e1-4ea9-8a63-9efe7092cdc5,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\"" Jan 13 21:29:52.715360 containerd[1971]: time="2025-01-13T21:29:52.715089883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:29:52.782597 kubelet[3350]: E0113 21:29:52.782479 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.782597 kubelet[3350]: W0113 21:29:52.782504 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.782597 kubelet[3350]: E0113 21:29:52.782529 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.786561 kubelet[3350]: E0113 21:29:52.785452 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.786716 kubelet[3350]: W0113 21:29:52.786572 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.786716 kubelet[3350]: E0113 21:29:52.786606 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.789971 kubelet[3350]: E0113 21:29:52.789631 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.789971 kubelet[3350]: W0113 21:29:52.789963 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.790720 kubelet[3350]: E0113 21:29:52.790520 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.791955 kubelet[3350]: E0113 21:29:52.791725 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.791955 kubelet[3350]: W0113 21:29:52.791743 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.791955 kubelet[3350]: E0113 21:29:52.791770 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.805604 kubelet[3350]: E0113 21:29:52.805564 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.805604 kubelet[3350]: W0113 21:29:52.805594 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.808629 kubelet[3350]: E0113 21:29:52.805714 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.808629 kubelet[3350]: E0113 21:29:52.807839 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.808629 kubelet[3350]: W0113 21:29:52.807858 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.808629 kubelet[3350]: E0113 21:29:52.807881 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.810100 kubelet[3350]: E0113 21:29:52.808910 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.810100 kubelet[3350]: W0113 21:29:52.808926 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.810100 kubelet[3350]: E0113 21:29:52.808967 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.811982 kubelet[3350]: E0113 21:29:52.811947 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.812409 kubelet[3350]: W0113 21:29:52.812064 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.812409 kubelet[3350]: E0113 21:29:52.812092 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.812898 kubelet[3350]: E0113 21:29:52.812700 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.812898 kubelet[3350]: W0113 21:29:52.812712 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.812898 kubelet[3350]: E0113 21:29:52.812731 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.814888 kubelet[3350]: E0113 21:29:52.814869 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.814888 kubelet[3350]: W0113 21:29:52.814888 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.815066 kubelet[3350]: E0113 21:29:52.814904 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.815738 kubelet[3350]: E0113 21:29:52.815701 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.815815 kubelet[3350]: W0113 21:29:52.815803 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.815862 kubelet[3350]: E0113 21:29:52.815822 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.818976 kubelet[3350]: E0113 21:29:52.818954 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.818976 kubelet[3350]: W0113 21:29:52.818974 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.819178 kubelet[3350]: E0113 21:29:52.818995 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.819899 kubelet[3350]: E0113 21:29:52.819826 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.819899 kubelet[3350]: W0113 21:29:52.819843 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.820719 kubelet[3350]: E0113 21:29:52.819903 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.820719 kubelet[3350]: E0113 21:29:52.820443 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.820719 kubelet[3350]: W0113 21:29:52.820454 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.820719 kubelet[3350]: E0113 21:29:52.820468 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.823659 kubelet[3350]: E0113 21:29:52.822870 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.823659 kubelet[3350]: W0113 21:29:52.822886 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.824668 kubelet[3350]: E0113 21:29:52.824373 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.824668 kubelet[3350]: W0113 21:29:52.824386 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.825074 kubelet[3350]: E0113 21:29:52.825058 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.825133 kubelet[3350]: W0113 21:29:52.825074 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.825133 kubelet[3350]: E0113 21:29:52.825114 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.828579 kubelet[3350]: E0113 21:29:52.827895 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.828579 kubelet[3350]: W0113 21:29:52.827914 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.828579 kubelet[3350]: E0113 21:29:52.827933 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.828579 kubelet[3350]: E0113 21:29:52.828217 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.828579 kubelet[3350]: W0113 21:29:52.828227 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.828579 kubelet[3350]: E0113 21:29:52.828241 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.830503 kubelet[3350]: E0113 21:29:52.830474 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.830603 kubelet[3350]: E0113 21:29:52.830525 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.832823 kubelet[3350]: E0113 21:29:52.832351 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.832823 kubelet[3350]: W0113 21:29:52.832370 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.833838 kubelet[3350]: E0113 21:29:52.833803 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.834518 kubelet[3350]: W0113 21:29:52.833925 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.835035 containerd[1971]: time="2025-01-13T21:29:52.834862044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b86fb5df-f9x6m,Uid:cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\"" Jan 13 21:29:52.835468 kubelet[3350]: E0113 21:29:52.835410 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.835468 kubelet[3350]: W0113 21:29:52.835434 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.835468 kubelet[3350]: E0113 21:29:52.835455 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.836512 kubelet[3350]: E0113 21:29:52.835496 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.836512 kubelet[3350]: E0113 21:29:52.835513 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.837586 kubelet[3350]: E0113 21:29:52.837568 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.837893 kubelet[3350]: W0113 21:29:52.837728 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.837893 kubelet[3350]: E0113 21:29:52.838115 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:29:52.838742 kubelet[3350]: E0113 21:29:52.838704 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.838742 kubelet[3350]: W0113 21:29:52.838719 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.838828 kubelet[3350]: E0113 21:29:52.838751 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.839822 kubelet[3350]: E0113 21:29:52.839729 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.839822 kubelet[3350]: W0113 21:29:52.839747 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.839822 kubelet[3350]: E0113 21:29:52.839764 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:52.853007 kubelet[3350]: E0113 21:29:52.852911 3350 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:29:52.853007 kubelet[3350]: W0113 21:29:52.852938 3350 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:29:52.853007 kubelet[3350]: E0113 21:29:52.852958 3350 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:29:53.803774 kubelet[3350]: E0113 21:29:53.802157 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:29:54.084131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267367559.mount: Deactivated successfully. 
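[Annotation] The repeated driver-call failures above all have one cause: the kubelet is probing its FlexVolume plugin directory, finds the nodeagent~uds vendor directory before the uds binary has been installed into it, execs a file that does not exist, gets empty stdout, and then fails to unmarshal that empty string as JSON ("unexpected end of JSON input"). A minimal sketch of what a FlexVolume driver must print for the init call, per the FlexVolume protocol; this is illustrative only, not Calico's actual uds driver:

```go
// flexvol_init.go — sketch of the FlexVolume driver-call protocol.
// The kubelet execs the driver and JSON-unmarshals its stdout; an
// empty stdout is exactly what produces the errors logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape driver-call.go expects back.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Tell the kubelet this driver needs no controller-side attach.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

The storm resolves itself further down: the pod2daemon-flexvol init container pulled at 21:29:54 exists precisely to copy the real uds binary into nodeagent~uds/, after which the probe succeeds.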
Jan 13 21:29:54.302299 containerd[1971]: time="2025-01-13T21:29:54.301994927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:54.306100 containerd[1971]: time="2025-01-13T21:29:54.305753460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 21:29:54.308623 containerd[1971]: time="2025-01-13T21:29:54.308466371Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:54.314401 containerd[1971]: time="2025-01-13T21:29:54.314344706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:54.316702 containerd[1971]: time="2025-01-13T21:29:54.315790873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.600652886s" Jan 13 21:29:54.316702 containerd[1971]: time="2025-01-13T21:29:54.315869086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:29:54.324415 containerd[1971]: time="2025-01-13T21:29:54.324140138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:29:54.355802 containerd[1971]: time="2025-01-13T21:29:54.351589366Z" level=info msg="CreateContainer within sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:29:54.396624 containerd[1971]: time="2025-01-13T21:29:54.396573645Z" level=info msg="CreateContainer within sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\"" Jan 13 21:29:54.398724 containerd[1971]: time="2025-01-13T21:29:54.397500403Z" level=info msg="StartContainer for \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\"" Jan 13 21:29:54.449265 systemd[1]: run-containerd-runc-k8s.io-8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a-runc.wXkyad.mount: Deactivated successfully. Jan 13 21:29:54.466516 systemd[1]: Started cri-containerd-8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a.scope - libcontainer container 8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a. Jan 13 21:29:54.520553 containerd[1971]: time="2025-01-13T21:29:54.520442261Z" level=info msg="StartContainer for \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\" returns successfully" Jan 13 21:29:54.543012 systemd[1]: cri-containerd-8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a.scope: Deactivated successfully. 
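[Annotation] The "in 1.600652886s" in the Pulled line is containerd's own measurement of the pull; it can be cross-checked against the two surrounding log timestamps, the PullImage request at 21:29:52.715089883Z and the Pulled event at 21:29:54.315790873Z. A small check using only values copied from the log (the ~48µs gap is log-emission overhead around the measured interval):

```go
// pull_duration.go — cross-check the reported image pull duration
// against the PullImage and Pulled log timestamps above.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:29:52.715089883Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:29:54.315790873Z")
	fmt.Println(done.Sub(start)) // ≈ 1.60070099s vs. the reported 1.600652886s
}
```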
Jan 13 21:29:54.680086 containerd[1971]: time="2025-01-13T21:29:54.676718885Z" level=info msg="shim disconnected" id=8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a namespace=k8s.io Jan 13 21:29:54.686927 containerd[1971]: time="2025-01-13T21:29:54.684889230Z" level=warning msg="cleaning up after shim disconnected" id=8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a namespace=k8s.io Jan 13 21:29:54.686927 containerd[1971]: time="2025-01-13T21:29:54.684929983Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:29:55.083513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a-rootfs.mount: Deactivated successfully. Jan 13 21:29:55.802516 kubelet[3350]: E0113 21:29:55.802384 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:29:57.027144 containerd[1971]: time="2025-01-13T21:29:57.027090755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:57.028957 containerd[1971]: time="2025-01-13T21:29:57.028905513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 13 21:29:57.030159 containerd[1971]: time="2025-01-13T21:29:57.030051777Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:57.032821 containerd[1971]: time="2025-01-13T21:29:57.032737596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:57.034886 containerd[1971]: time="2025-01-13T21:29:57.034616311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.71031987s" Jan 13 21:29:57.035906 containerd[1971]: time="2025-01-13T21:29:57.035870863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 21:29:57.038074 containerd[1971]: time="2025-01-13T21:29:57.038019112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:29:57.087293 containerd[1971]: time="2025-01-13T21:29:57.087163801Z" level=info msg="CreateContainer within sandbox \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:29:57.125062 containerd[1971]: time="2025-01-13T21:29:57.124938602Z" level=info msg="CreateContainer within sandbox \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\"" Jan 13 21:29:57.126168 containerd[1971]: 
time="2025-01-13T21:29:57.126069368Z" level=info msg="StartContainer for \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\"" Jan 13 21:29:57.218852 systemd[1]: Started cri-containerd-d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7.scope - libcontainer container d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7. Jan 13 21:29:57.308923 containerd[1971]: time="2025-01-13T21:29:57.308699893Z" level=info msg="StartContainer for \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\" returns successfully" Jan 13 21:29:57.801898 kubelet[3350]: E0113 21:29:57.801702 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:29:58.931999 kubelet[3350]: I0113 21:29:58.931851 3350 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:29:59.801983 kubelet[3350]: E0113 21:29:59.801932 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:30:01.803765 kubelet[3350]: E0113 21:30:01.803690 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:30:03.704862 containerd[1971]: time="2025-01-13T21:30:03.704806231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:03.706149 containerd[1971]: time="2025-01-13T21:30:03.705994994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:30:03.708366 containerd[1971]: time="2025-01-13T21:30:03.708320928Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:03.712723 containerd[1971]: time="2025-01-13T21:30:03.712673969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:03.713961 containerd[1971]: time="2025-01-13T21:30:03.713810137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.675752699s" Jan 13 21:30:03.713961 containerd[1971]: time="2025-01-13T21:30:03.713854177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:30:03.719035 containerd[1971]: 
time="2025-01-13T21:30:03.718978536Z" level=info msg="CreateContainer within sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:30:03.803191 kubelet[3350]: E0113 21:30:03.802721 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:30:03.997767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440786263.mount: Deactivated successfully. Jan 13 21:30:04.004569 containerd[1971]: time="2025-01-13T21:30:04.001800268Z" level=info msg="CreateContainer within sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\"" Jan 13 21:30:04.004569 containerd[1971]: time="2025-01-13T21:30:04.003479662Z" level=info msg="StartContainer for \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\"" Jan 13 21:30:04.102329 systemd[1]: Started cri-containerd-df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23.scope - libcontainer container df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23. Jan 13 21:30:04.169220 containerd[1971]: time="2025-01-13T21:30:04.169167227Z" level=info msg="StartContainer for \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\" returns successfully" Jan 13 21:30:05.387319 kubelet[3350]: I0113 21:30:05.385745 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77b86fb5df-f9x6m" podStartSLOduration=10.189265843 podStartE2EDuration="14.385725541s" podCreationTimestamp="2025-01-13 21:29:51 +0000 UTC" firstStartedPulling="2025-01-13 21:29:52.841096321 +0000 UTC m=+12.310773785" lastFinishedPulling="2025-01-13 21:29:57.037556014 +0000 UTC m=+16.507233483" observedRunningTime="2025-01-13 21:29:57.995805001 +0000 UTC m=+17.465482475" watchObservedRunningTime="2025-01-13 21:30:05.385725541 +0000 UTC m=+24.855403004" Jan 13 21:30:05.803665 kubelet[3350]: E0113 21:30:05.802094 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:30:06.632921 systemd[1]: cri-containerd-df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23.scope: Deactivated successfully. Jan 13 21:30:06.681910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23-rootfs.mount: Deactivated successfully. 
Jan 13 21:30:06.690221 containerd[1971]: time="2025-01-13T21:30:06.690155180Z" level=info msg="shim disconnected" id=df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23 namespace=k8s.io Jan 13 21:30:06.690221 containerd[1971]: time="2025-01-13T21:30:06.690218094Z" level=warning msg="cleaning up after shim disconnected" id=df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23 namespace=k8s.io Jan 13 21:30:06.690221 containerd[1971]: time="2025-01-13T21:30:06.690229389Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:06.745158 containerd[1971]: time="2025-01-13T21:30:06.744842613Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:30:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 21:30:06.757548 kubelet[3350]: I0113 21:30:06.757517 3350 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 21:30:06.884830 systemd[1]: Created slice kubepods-burstable-pod85e576e6_d66c_4263_a4ec_9e1bd46d45d0.slice - libcontainer container kubepods-burstable-pod85e576e6_d66c_4263_a4ec_9e1bd46d45d0.slice. Jan 13 21:30:06.909447 kubelet[3350]: W0113 21:30:06.893116 3350 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-253" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-253' and this object Jan 13 21:30:06.909447 kubelet[3350]: E0113 21:30:06.893182 3350 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-18-253\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-253' and this object" logger="UnhandledError" Jan 13 21:30:06.926094 systemd[1]: Created slice kubepods-burstable-pod6df22146_8f07_4f5d_bc45_b3dcbe228775.slice - libcontainer container kubepods-burstable-pod6df22146_8f07_4f5d_bc45_b3dcbe228775.slice. Jan 13 21:30:06.944042 systemd[1]: Created slice kubepods-besteffort-poda863c6f7_9d33_4cc7_acb9_6720fe35112d.slice - libcontainer container kubepods-besteffort-poda863c6f7_9d33_4cc7_acb9_6720fe35112d.slice. Jan 13 21:30:06.955519 systemd[1]: Created slice kubepods-besteffort-pod66928d5a_cb4e_4c35_8a71_cae23340ac99.slice - libcontainer container kubepods-besteffort-pod66928d5a_cb4e_4c35_8a71_cae23340ac99.slice. Jan 13 21:30:06.971482 systemd[1]: Created slice kubepods-besteffort-podcd1f0189_02c3_4f32_9cfa_9e41e4d3764b.slice - libcontainer container kubepods-besteffort-podcd1f0189_02c3_4f32_9cfa_9e41e4d3764b.slice. 
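[Annotation] The slice names systemd creates above encode each pod's QoS class and UID: kubepods-burstable-pod<uid>.slice for the coredns pods, kubepods-besteffort-pod<uid>.slice for the rest, with the dashes in the UID replaced by underscores (in systemd slice names "-" denotes hierarchy, so a literal dash in the UID must be escaped). A sketch of the transformation, derived only from what the log shows rather than from kubelet source:

```go
// pod_slice.go — derive the systemd slice name logged for a pod:
// QoS-class prefix plus the pod UID with "-" mapped to "_".
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Matches "kubepods-burstable-pod85e576e6_d66c_4263_a4ec_9e1bd46d45d0.slice" above.
	fmt.Println(podSlice("burstable", "85e576e6-d66c-4263-a4ec-9e1bd46d45d0"))
}
```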
Jan 13 21:30:07.005717 kubelet[3350]: I0113 21:30:07.005672 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6df22146-8f07-4f5d-bc45-b3dcbe228775-config-volume\") pod \"coredns-6f6b679f8f-r85h4\" (UID: \"6df22146-8f07-4f5d-bc45-b3dcbe228775\") " pod="kube-system/coredns-6f6b679f8f-r85h4" Jan 13 21:30:07.005870 kubelet[3350]: I0113 21:30:07.005726 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkk8c\" (UniqueName: \"kubernetes.io/projected/85e576e6-d66c-4263-a4ec-9e1bd46d45d0-kube-api-access-xkk8c\") pod \"coredns-6f6b679f8f-trkmh\" (UID: \"85e576e6-d66c-4263-a4ec-9e1bd46d45d0\") " pod="kube-system/coredns-6f6b679f8f-trkmh" Jan 13 21:30:07.005870 kubelet[3350]: I0113 21:30:07.005756 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vc7r\" (UniqueName: \"kubernetes.io/projected/a863c6f7-9d33-4cc7-acb9-6720fe35112d-kube-api-access-4vc7r\") pod \"calico-kube-controllers-64cf758d46-dk7vw\" (UID: \"a863c6f7-9d33-4cc7-acb9-6720fe35112d\") " pod="calico-system/calico-kube-controllers-64cf758d46-dk7vw" Jan 13 21:30:07.005870 kubelet[3350]: I0113 21:30:07.005781 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d95kv\" (UniqueName: \"kubernetes.io/projected/6df22146-8f07-4f5d-bc45-b3dcbe228775-kube-api-access-d95kv\") pod \"coredns-6f6b679f8f-r85h4\" (UID: \"6df22146-8f07-4f5d-bc45-b3dcbe228775\") " pod="kube-system/coredns-6f6b679f8f-r85h4" Jan 13 21:30:07.005870 kubelet[3350]: I0113 21:30:07.005802 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85e576e6-d66c-4263-a4ec-9e1bd46d45d0-config-volume\") pod \"coredns-6f6b679f8f-trkmh\" (UID: \"85e576e6-d66c-4263-a4ec-9e1bd46d45d0\") " pod="kube-system/coredns-6f6b679f8f-trkmh" Jan 13 21:30:07.005870 kubelet[3350]: I0113 21:30:07.005837 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a863c6f7-9d33-4cc7-acb9-6720fe35112d-tigera-ca-bundle\") pod \"calico-kube-controllers-64cf758d46-dk7vw\" (UID: \"a863c6f7-9d33-4cc7-acb9-6720fe35112d\") " pod="calico-system/calico-kube-controllers-64cf758d46-dk7vw" Jan 13 21:30:07.020523 containerd[1971]: time="2025-01-13T21:30:07.020479860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:30:07.106861 kubelet[3350]: I0113 21:30:07.106818 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2zdx\" (UniqueName: \"kubernetes.io/projected/66928d5a-cb4e-4c35-8a71-cae23340ac99-kube-api-access-z2zdx\") pod \"calico-apiserver-6c6b78d879-nt4jz\" (UID: \"66928d5a-cb4e-4c35-8a71-cae23340ac99\") " pod="calico-apiserver/calico-apiserver-6c6b78d879-nt4jz" Jan 13 21:30:07.107013 kubelet[3350]: I0113 21:30:07.106919 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/66928d5a-cb4e-4c35-8a71-cae23340ac99-calico-apiserver-certs\") pod \"calico-apiserver-6c6b78d879-nt4jz\" (UID: \"66928d5a-cb4e-4c35-8a71-cae23340ac99\") " pod="calico-apiserver/calico-apiserver-6c6b78d879-nt4jz" Jan 13 21:30:07.107013 
kubelet[3350]: I0113 21:30:07.106960 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd1f0189-02c3-4f32-9cfa-9e41e4d3764b-calico-apiserver-certs\") pod \"calico-apiserver-6c6b78d879-25mr6\" (UID: \"cd1f0189-02c3-4f32-9cfa-9e41e4d3764b\") " pod="calico-apiserver/calico-apiserver-6c6b78d879-25mr6" Jan 13 21:30:07.107220 kubelet[3350]: I0113 21:30:07.107016 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9bdn\" (UniqueName: \"kubernetes.io/projected/cd1f0189-02c3-4f32-9cfa-9e41e4d3764b-kube-api-access-v9bdn\") pod \"calico-apiserver-6c6b78d879-25mr6\" (UID: \"cd1f0189-02c3-4f32-9cfa-9e41e4d3764b\") " pod="calico-apiserver/calico-apiserver-6c6b78d879-25mr6" Jan 13 21:30:07.254157 containerd[1971]: time="2025-01-13T21:30:07.254115186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cf758d46-dk7vw,Uid:a863c6f7-9d33-4cc7-acb9-6720fe35112d,Namespace:calico-system,Attempt:0,}" Jan 13 21:30:07.263577 containerd[1971]: time="2025-01-13T21:30:07.263530584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b78d879-nt4jz,Uid:66928d5a-cb4e-4c35-8a71-cae23340ac99,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:30:07.283918 containerd[1971]: time="2025-01-13T21:30:07.283868231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b78d879-25mr6,Uid:cd1f0189-02c3-4f32-9cfa-9e41e4d3764b,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:30:07.813845 systemd[1]: Created slice kubepods-besteffort-poda6e9e58c_0aa3_40c9_acb9_ed2d79b35ed4.slice - libcontainer container kubepods-besteffort-poda6e9e58c_0aa3_40c9_acb9_ed2d79b35ed4.slice. Jan 13 21:30:07.820445 containerd[1971]: time="2025-01-13T21:30:07.819846239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4xcp,Uid:a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4,Namespace:calico-system,Attempt:0,}" Jan 13 21:30:07.916470 containerd[1971]: time="2025-01-13T21:30:07.916387632Z" level=error msg="Failed to destroy network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.922356 containerd[1971]: time="2025-01-13T21:30:07.922270382Z" level=error msg="encountered an error cleaning up failed sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.927818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77-shm.mount: Deactivated successfully. 
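[Annotation] The UniqueName strings in the reconciler lines above follow a fixed shape: "<plugin>/<pod-UID>-<volume-name>", where the pod UID is always a 36-character UUID. Unpacking one of the entries from the log; the parsing is my own sketch, not kubelet code:

```go
// volume_unique_name.go — split a reconciler UniqueName as logged
// above into plugin, pod UID, and volume name.
package main

import (
	"fmt"
	"strings"
)

func main() {
	u := "kubernetes.io/projected/cd1f0189-02c3-4f32-9cfa-9e41e4d3764b-kube-api-access-v9bdn"
	i := strings.LastIndex(u, "/")
	plugin, rest := u[:i], u[i+1:]
	uid, vol := rest[:36], rest[37:] // 36-char UUID, then "-", then the volume name
	fmt.Println(plugin, uid, vol)
}
```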
Jan 13 21:30:07.930267 containerd[1971]: time="2025-01-13T21:30:07.922586227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cf758d46-dk7vw,Uid:a863c6f7-9d33-4cc7-acb9-6720fe35112d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.936804 kubelet[3350]: E0113 21:30:07.934765 3350 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.936804 kubelet[3350]: E0113 21:30:07.934960 3350 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64cf758d46-dk7vw" Jan 13 21:30:07.936804 kubelet[3350]: E0113 21:30:07.934990 3350 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64cf758d46-dk7vw" Jan 13 21:30:07.938077 kubelet[3350]: E0113 21:30:07.937879 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64cf758d46-dk7vw_calico-system(a863c6f7-9d33-4cc7-acb9-6720fe35112d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64cf758d46-dk7vw_calico-system(a863c6f7-9d33-4cc7-acb9-6720fe35112d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64cf758d46-dk7vw" podUID="a863c6f7-9d33-4cc7-acb9-6720fe35112d" Jan 13 21:30:07.948532 containerd[1971]: time="2025-01-13T21:30:07.947938541Z" level=error msg="Failed to destroy network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.951161 containerd[1971]: time="2025-01-13T21:30:07.950241145Z" level=error msg="encountered an error cleaning up failed sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.951161 containerd[1971]: time="2025-01-13T21:30:07.950445496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b78d879-nt4jz,Uid:66928d5a-cb4e-4c35-8a71-cae23340ac99,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.953934 kubelet[3350]: E0113 21:30:07.950827 3350 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.953934 kubelet[3350]: E0113 21:30:07.950891 3350 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6b78d879-nt4jz" Jan 13 21:30:07.953934 kubelet[3350]: E0113 21:30:07.950917 3350 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6b78d879-nt4jz" Jan 13 21:30:07.954626 kubelet[3350]: E0113 21:30:07.950967 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6b78d879-nt4jz_calico-apiserver(66928d5a-cb4e-4c35-8a71-cae23340ac99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6b78d879-nt4jz_calico-apiserver(66928d5a-cb4e-4c35-8a71-cae23340ac99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6b78d879-nt4jz" podUID="66928d5a-cb4e-4c35-8a71-cae23340ac99" Jan 13 21:30:07.958906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd-shm.mount: Deactivated successfully. 
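[Annotation] Every sandbox failure in this stretch bottoms out in the same precondition, spelled out in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes once it is up, and refuses to wire pod networking until it exists. The calico-node container has not started yet (its image is still being pulled), so every RunPodSandbox attempt fails the same way. A minimal sketch of that readiness gate, using only the path from the error message, not the plugin's actual code:

```go
// calico_ready.go — the precondition behind every "failed to setup
// network for sandbox" error above: /var/lib/calico/nodename must
// exist before the Calico CNI plugin will proceed.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		// The exact condition surfaced in the sandbox errors.
		fmt.Printf("calico not ready: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("calico/node has registered this node")
}
```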
Jan 13 21:30:07.965301 containerd[1971]: time="2025-01-13T21:30:07.965243782Z" level=error msg="Failed to destroy network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.970982 containerd[1971]: time="2025-01-13T21:30:07.970929153Z" level=error msg="encountered an error cleaning up failed sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.971117 containerd[1971]: time="2025-01-13T21:30:07.971004749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b78d879-25mr6,Uid:cd1f0189-02c3-4f32-9cfa-9e41e4d3764b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.974035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775-shm.mount: Deactivated successfully. Jan 13 21:30:07.976048 kubelet[3350]: E0113 21:30:07.975923 3350 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:07.978180 kubelet[3350]: E0113 21:30:07.976230 3350 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6b78d879-25mr6" Jan 13 21:30:07.978180 kubelet[3350]: E0113 21:30:07.977476 3350 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6b78d879-25mr6" Jan 13 21:30:07.978180 kubelet[3350]: E0113 21:30:07.977597 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6b78d879-25mr6_calico-apiserver(cd1f0189-02c3-4f32-9cfa-9e41e4d3764b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6b78d879-25mr6_calico-apiserver(cd1f0189-02c3-4f32-9cfa-9e41e4d3764b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6b78d879-25mr6" podUID="cd1f0189-02c3-4f32-9cfa-9e41e4d3764b" Jan 13 21:30:08.022566 kubelet[3350]: I0113 21:30:08.022534 3350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:08.046196 containerd[1971]: time="2025-01-13T21:30:08.046143093Z" level=info msg="StopPodSandbox for \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\"" Jan 13 21:30:08.048485 kubelet[3350]: I0113 21:30:08.047613 3350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:08.050198 containerd[1971]: time="2025-01-13T21:30:08.049223239Z" level=info msg="StopPodSandbox for \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\"" Jan 13 21:30:08.050198 containerd[1971]: time="2025-01-13T21:30:08.049752305Z" level=info msg="Ensure that sandbox 629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775 in task-service has been cleanup successfully" Jan 13 21:30:08.050866 containerd[1971]: time="2025-01-13T21:30:08.050759084Z" level=info msg="Ensure that sandbox 929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd in task-service has been cleanup successfully" Jan 13 21:30:08.059904 kubelet[3350]: I0113 21:30:08.057018 3350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:08.061473 containerd[1971]: time="2025-01-13T21:30:08.061216632Z" level=info msg="StopPodSandbox for \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\"" Jan 13 21:30:08.062703 containerd[1971]: time="2025-01-13T21:30:08.062666334Z" level=info msg="Ensure that sandbox dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77 in task-service has been cleanup successfully" Jan 13 21:30:08.110549 kubelet[3350]: E0113 21:30:08.110361 3350 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:30:08.112052 kubelet[3350]: E0113 21:30:08.111121 3350 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6df22146-8f07-4f5d-bc45-b3dcbe228775-config-volume podName:6df22146-8f07-4f5d-bc45-b3dcbe228775 nodeName:}" failed. No retries permitted until 2025-01-13 21:30:08.611093736 +0000 UTC m=+28.080771206 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6df22146-8f07-4f5d-bc45-b3dcbe228775-config-volume") pod "coredns-6f6b679f8f-r85h4" (UID: "6df22146-8f07-4f5d-bc45-b3dcbe228775") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:30:08.112052 kubelet[3350]: E0113 21:30:08.110997 3350 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:30:08.112052 kubelet[3350]: E0113 21:30:08.111843 3350 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85e576e6-d66c-4263-a4ec-9e1bd46d45d0-config-volume podName:85e576e6-d66c-4263-a4ec-9e1bd46d45d0 nodeName:}" failed. No retries permitted until 2025-01-13 21:30:08.611720836 +0000 UTC m=+28.081398300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/85e576e6-d66c-4263-a4ec-9e1bd46d45d0-config-volume") pod "coredns-6f6b679f8f-trkmh" (UID: "85e576e6-d66c-4263-a4ec-9e1bd46d45d0") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:30:08.164768 containerd[1971]: time="2025-01-13T21:30:08.163974644Z" level=error msg="Failed to destroy network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.164768 containerd[1971]: time="2025-01-13T21:30:08.164468679Z" level=error msg="encountered an error cleaning up failed sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.164768 containerd[1971]: time="2025-01-13T21:30:08.164534131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4xcp,Uid:a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.165862 kubelet[3350]: E0113 21:30:08.165141 3350 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.165862 kubelet[3350]: E0113 21:30:08.165211 3350 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v4xcp" Jan 13 21:30:08.165862 kubelet[3350]: E0113 21:30:08.165239 3350 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v4xcp" Jan 13 21:30:08.166026 kubelet[3350]: E0113 21:30:08.165289 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v4xcp_calico-system(a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v4xcp_calico-system(a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:30:08.206761 containerd[1971]: time="2025-01-13T21:30:08.206702508Z" level=error msg="StopPodSandbox for \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\" failed" error="failed to destroy network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.207062 kubelet[3350]: E0113 21:30:08.207024 3350 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:08.207160 kubelet[3350]: E0113 21:30:08.207090 3350 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775"} Jan 13 21:30:08.207245 kubelet[3350]: E0113 21:30:08.207180 3350 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd1f0189-02c3-4f32-9cfa-9e41e4d3764b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:08.207245 kubelet[3350]: E0113 21:30:08.207214 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd1f0189-02c3-4f32-9cfa-9e41e4d3764b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6b78d879-25mr6" podUID="cd1f0189-02c3-4f32-9cfa-9e41e4d3764b" Jan 13 21:30:08.210880 containerd[1971]: time="2025-01-13T21:30:08.210831564Z" level=error msg="StopPodSandbox for \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\" failed" error="failed to destroy network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.211234 kubelet[3350]: E0113 21:30:08.211188 3350 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:08.211351 kubelet[3350]: E0113 21:30:08.211248 3350 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd"} Jan 13 21:30:08.211351 kubelet[3350]: E0113 21:30:08.211290 3350 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66928d5a-cb4e-4c35-8a71-cae23340ac99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:08.211351 kubelet[3350]: E0113 21:30:08.211324 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66928d5a-cb4e-4c35-8a71-cae23340ac99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6b78d879-nt4jz" podUID="66928d5a-cb4e-4c35-8a71-cae23340ac99" Jan 13 21:30:08.220848 containerd[1971]: time="2025-01-13T21:30:08.220799958Z" level=error msg="StopPodSandbox for \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\" failed" error="failed to destroy network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.221323 kubelet[3350]: E0113 21:30:08.221031 3350 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:08.221323 kubelet[3350]: E0113 21:30:08.221133 3350 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77"} Jan 13 21:30:08.221323 kubelet[3350]: E0113 21:30:08.221165 3350 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a863c6f7-9d33-4cc7-acb9-6720fe35112d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:08.221323 kubelet[3350]: E0113 21:30:08.221189 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a863c6f7-9d33-4cc7-acb9-6720fe35112d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64cf758d46-dk7vw" podUID="a863c6f7-9d33-4cc7-acb9-6720fe35112d" Jan 13 21:30:08.681492 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897-shm.mount: Deactivated successfully. Jan 13 21:30:08.689896 containerd[1971]: time="2025-01-13T21:30:08.689852515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-trkmh,Uid:85e576e6-d66c-4263-a4ec-9e1bd46d45d0,Namespace:kube-system,Attempt:0,}" Jan 13 21:30:08.744808 containerd[1971]: time="2025-01-13T21:30:08.744753377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r85h4,Uid:6df22146-8f07-4f5d-bc45-b3dcbe228775,Namespace:kube-system,Attempt:0,}" Jan 13 21:30:08.809799 containerd[1971]: time="2025-01-13T21:30:08.809486393Z" level=error msg="Failed to destroy network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.814138 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0-shm.mount: Deactivated successfully. 
Jan 13 21:30:08.814987 containerd[1971]: time="2025-01-13T21:30:08.814763081Z" level=error msg="encountered an error cleaning up failed sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.814987 containerd[1971]: time="2025-01-13T21:30:08.814836499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-trkmh,Uid:85e576e6-d66c-4263-a4ec-9e1bd46d45d0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.817935 kubelet[3350]: E0113 21:30:08.815737 3350 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.817935 kubelet[3350]: E0113 21:30:08.815860 3350 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-trkmh" Jan 13 21:30:08.817935 kubelet[3350]: E0113 21:30:08.815886 3350 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-trkmh" Jan 13 21:30:08.818159 kubelet[3350]: E0113 21:30:08.815937 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-trkmh_kube-system(85e576e6-d66c-4263-a4ec-9e1bd46d45d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-trkmh_kube-system(85e576e6-d66c-4263-a4ec-9e1bd46d45d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-trkmh" podUID="85e576e6-d66c-4263-a4ec-9e1bd46d45d0" Jan 13 21:30:08.867631 containerd[1971]: time="2025-01-13T21:30:08.867571996Z" level=error msg="Failed to destroy network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 13 21:30:08.868711 containerd[1971]: time="2025-01-13T21:30:08.868566135Z" level=error msg="encountered an error cleaning up failed sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.869399 containerd[1971]: time="2025-01-13T21:30:08.868705646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r85h4,Uid:6df22146-8f07-4f5d-bc45-b3dcbe228775,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.871452 kubelet[3350]: E0113 21:30:08.869481 3350 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:08.871452 kubelet[3350]: E0113 21:30:08.869731 3350 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-r85h4" Jan 13 21:30:08.871452 kubelet[3350]: E0113 21:30:08.869815 3350 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-r85h4" Jan 13 21:30:08.871811 kubelet[3350]: E0113 21:30:08.869886 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-r85h4_kube-system(6df22146-8f07-4f5d-bc45-b3dcbe228775)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-r85h4_kube-system(6df22146-8f07-4f5d-bc45-b3dcbe228775)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-r85h4" podUID="6df22146-8f07-4f5d-bc45-b3dcbe228775" Jan 13 21:30:08.874299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b-shm.mount: Deactivated successfully. 
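Every sandbox create and destroy above fails on the same stat of /var/lib/calico/nodename. That file is written by the calico/node container once it is up; until then the CNI plugin cannot learn the node's Calico name and aborts both ADD and DEL. A minimal sketch of the check, with the path and error wording taken from the log and everything else illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by calico/node at startup; the CNI plugin reads it
// to learn which Calico node it is running on.
const nodenameFile = "/var/lib/calico/nodename"

func requireNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// This is the failure repeated throughout the log: until calico/node
		// is running and has mounted /var/lib/calico/, every CNI ADD and DEL
		// stops here.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	if name, err := requireNodename(); err != nil {
		fmt.Println("CNI would fail:", err)
	} else {
		fmt.Println("Calico node name:", name)
	}
}
```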
Jan 13 21:30:09.064920 kubelet[3350]: I0113 21:30:09.064761 3350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:09.069767 containerd[1971]: time="2025-01-13T21:30:09.069409319Z" level=info msg="StopPodSandbox for \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\"" Jan 13 21:30:09.076242 containerd[1971]: time="2025-01-13T21:30:09.075862420Z" level=info msg="Ensure that sandbox 3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b in task-service has been cleanup successfully" Jan 13 21:30:09.081584 kubelet[3350]: I0113 21:30:09.081541 3350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:09.091316 containerd[1971]: time="2025-01-13T21:30:09.091031889Z" level=info msg="StopPodSandbox for \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\"" Jan 13 21:30:09.094040 containerd[1971]: time="2025-01-13T21:30:09.092676414Z" level=info msg="Ensure that sandbox 75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0 in task-service has been cleanup successfully" Jan 13 21:30:09.094163 kubelet[3350]: I0113 21:30:09.093980 3350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:09.095528 containerd[1971]: time="2025-01-13T21:30:09.094620292Z" level=info msg="StopPodSandbox for \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\"" Jan 13 21:30:09.095528 containerd[1971]: time="2025-01-13T21:30:09.095038047Z" level=info msg="Ensure that sandbox 971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897 in task-service has been cleanup successfully" Jan 13 21:30:09.208388 containerd[1971]: time="2025-01-13T21:30:09.208325534Z" level=error msg="StopPodSandbox for \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\" failed" error="failed to destroy network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:09.208886 kubelet[3350]: E0113 21:30:09.208700 3350 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:09.208886 kubelet[3350]: E0113 21:30:09.208763 3350 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0"} Jan 13 21:30:09.208886 kubelet[3350]: E0113 21:30:09.208809 3350 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85e576e6-d66c-4263-a4ec-9e1bd46d45d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:09.208886 kubelet[3350]: E0113 21:30:09.208846 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85e576e6-d66c-4263-a4ec-9e1bd46d45d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-trkmh" podUID="85e576e6-d66c-4263-a4ec-9e1bd46d45d0" Jan 13 21:30:09.219304 containerd[1971]: time="2025-01-13T21:30:09.219252684Z" level=error msg="StopPodSandbox for \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\" failed" error="failed to destroy network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:09.219597 containerd[1971]: time="2025-01-13T21:30:09.219410491Z" level=error msg="StopPodSandbox for \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\" failed" error="failed to destroy network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:09.219761 kubelet[3350]: E0113 21:30:09.219523 3350 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:09.219761 kubelet[3350]: E0113 21:30:09.219575 3350 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b"} Jan 13 21:30:09.219761 kubelet[3350]: E0113 21:30:09.219622 3350 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6df22146-8f07-4f5d-bc45-b3dcbe228775\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:09.219761 kubelet[3350]: E0113 21:30:09.219669 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6df22146-8f07-4f5d-bc45-b3dcbe228775\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-r85h4" podUID="6df22146-8f07-4f5d-bc45-b3dcbe228775" Jan 13 21:30:09.220141 kubelet[3350]: E0113 21:30:09.219819 3350 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:09.220141 kubelet[3350]: E0113 21:30:09.219864 3350 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897"} Jan 13 21:30:09.220141 kubelet[3350]: E0113 21:30:09.219897 3350 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:09.220141 kubelet[3350]: E0113 21:30:09.219924 3350 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v4xcp" podUID="a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4" Jan 13 21:30:09.718149 kubelet[3350]: I0113 21:30:09.717710 3350 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:17.864841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618669074.mount: Deactivated successfully. 
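The systemd unit names in these lines (run-containerd-...-shm.mount, var-lib-containerd-tmpmounts-containerd\x2dmount2618669074.mount) are escaped mount-point paths: the leading slash is dropped, path separators become dashes, and ambiguous bytes, including literal dashes inside a path component, become \xNN escapes. A simplified sketch of that escaping (real systemd-escape has a few more rules, e.g. for a leading dot):

```go
package main

import "fmt"

// escapePath approximates `systemd-escape --path`: strip the leading "/",
// turn the remaining "/" separators into "-", and hex-escape any byte that
// is not alphanumeric or one of ":_." (so "-" in a component becomes `\x2d`).
func escapePath(p string) string {
	if len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':' || c == '_' || c == '.':
			out = append(out, c)
		default:
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out)
}

func main() {
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2618669074") + ".mount")
	// var-lib-containerd-tmpmounts-containerd\x2dmount2618669074.mount
}
```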
Jan 13 21:30:17.951826 containerd[1971]: time="2025-01-13T21:30:17.939037929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:17.979345 containerd[1971]: time="2025-01-13T21:30:17.944672976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:30:18.000254 containerd[1971]: time="2025-01-13T21:30:18.000203215Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:18.010014 containerd[1971]: time="2025-01-13T21:30:18.009960747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:18.012025 containerd[1971]: time="2025-01-13T21:30:18.011216107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.99067868s" Jan 13 21:30:18.012025 containerd[1971]: time="2025-01-13T21:30:18.011450419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:30:18.112942 containerd[1971]: time="2025-01-13T21:30:18.112896773Z" level=info msg="CreateContainer within sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:30:18.230952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131582594.mount: Deactivated successfully. Jan 13 21:30:18.258498 containerd[1971]: time="2025-01-13T21:30:18.258357956Z" level=info msg="CreateContainer within sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\"" Jan 13 21:30:18.277889 containerd[1971]: time="2025-01-13T21:30:18.276028644Z" level=info msg="StartContainer for \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\"" Jan 13 21:30:18.557010 systemd[1]: Started cri-containerd-6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625.scope - libcontainer container 6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625. Jan 13 21:30:18.664913 containerd[1971]: time="2025-01-13T21:30:18.664337841Z" level=info msg="StartContainer for \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\" returns successfully" Jan 13 21:30:19.049434 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:30:19.125053 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
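As a quick sanity check on the pull statistics above: 142742010 bytes read over the logged 10.99067868s works out to roughly 12.4 MiB/s average throughput. The arithmetic, using only the log's own numbers:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Both values come straight from the containerd lines above.
	bytesRead := 142742010.0
	elapsed, _ := time.ParseDuration("10.99067868s")
	mibPerSec := bytesRead / elapsed.Seconds() / (1 << 20)
	fmt.Printf("average pull throughput: %.1f MiB/s\n", mibPerSec) // ~12.4 MiB/s
}
```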
Jan 13 21:30:19.809619 containerd[1971]: time="2025-01-13T21:30:19.809509285Z" level=info msg="StopPodSandbox for \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\"" Jan 13 21:30:19.991551 kubelet[3350]: I0113 21:30:19.976802 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jz2nr" podStartSLOduration=2.643985851 podStartE2EDuration="27.946631814s" podCreationTimestamp="2025-01-13 21:29:52 +0000 UTC" firstStartedPulling="2025-01-13 21:29:52.710532848 +0000 UTC m=+12.180210320" lastFinishedPulling="2025-01-13 21:30:18.013178831 +0000 UTC m=+37.482856283" observedRunningTime="2025-01-13 21:30:19.383713471 +0000 UTC m=+38.853390943" watchObservedRunningTime="2025-01-13 21:30:19.946631814 +0000 UTC m=+39.416309286" Jan 13 21:30:20.319360 systemd[1]: run-containerd-runc-k8s.io-6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625-runc.aXfqeO.mount: Deactivated successfully. Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:19.938 [INFO][4495] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:19.939 [INFO][4495] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" iface="eth0" netns="/var/run/netns/cni-d7cdb924-85a7-f92a-00df-9a09168e592c" Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:19.939 [INFO][4495] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" iface="eth0" netns="/var/run/netns/cni-d7cdb924-85a7-f92a-00df-9a09168e592c" Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:19.948 [INFO][4495] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" iface="eth0" netns="/var/run/netns/cni-d7cdb924-85a7-f92a-00df-9a09168e592c" Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:19.948 [INFO][4495] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:19.948 [INFO][4495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:20.371 [INFO][4501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" HandleID="k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:20.378 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:20.380 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:20.404 [WARNING][4501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" HandleID="k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:20.404 [INFO][4501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" HandleID="k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:20.406 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:20.412810 containerd[1971]: 2025-01-13 21:30:20.409 [INFO][4495] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:20.414269 containerd[1971]: time="2025-01-13T21:30:20.413060080Z" level=info msg="TearDown network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\" successfully" Jan 13 21:30:20.414269 containerd[1971]: time="2025-01-13T21:30:20.413094072Z" level=info msg="StopPodSandbox for \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\" returns successfully" Jan 13 21:30:20.418274 systemd[1]: run-netns-cni\x2dd7cdb924\x2d85a7\x2df92a\x2d00df\x2d9a09168e592c.mount: Deactivated successfully. Jan 13 21:30:20.446789 containerd[1971]: time="2025-01-13T21:30:20.440921417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b78d879-25mr6,Uid:cd1f0189-02c3-4f32-9cfa-9e41e4d3764b,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:30:20.873014 (udev-worker)[4575]: Network interface NamePolicy= disabled on kernel command line. 
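The WARNING in the teardown trace above ("Asked to release address but it doesn't exist. Ignoring") shows the release path being idempotent: a missing allocation is logged and skipped rather than failing the StopPodSandbox. A minimal sketch, assuming a map-backed allocator rather than Calico's real datastore-backed one:

```go
package main

import "log"

// ipam is a stand-in allocator; Calico's real one lives in a datastore and is
// guarded by the host-wide IPAM lock seen in the trace.
type ipam struct {
	byHandle map[string][]string // handleID -> addresses allocated under it
}

// releaseByHandle mirrors the behaviour in the log: releasing a handle that
// no longer exists is a WARNING, not an error, so teardown still succeeds.
func (p *ipam) releaseByHandle(handleID string) []string {
	addrs, ok := p.byHandle[handleID]
	if !ok {
		log.Printf("WARNING: Asked to release address but it doesn't exist. Ignoring handle %q", handleID)
		return nil
	}
	delete(p.byHandle, handleID)
	return addrs
}

func main() {
	p := &ipam{byHandle: map[string][]string{}}
	// Same handle format as the trace: "k8s-pod-network." + container ID.
	p.releaseByHandle("k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775")
}
```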
Jan 13 21:30:20.875293 systemd-networkd[1894]: cali757e7536d21: Link UP Jan 13 21:30:20.875786 systemd-networkd[1894]: cali757e7536d21: Gained carrier Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.626 [INFO][4529] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.642 [INFO][4529] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0 calico-apiserver-6c6b78d879- calico-apiserver cd1f0189-02c3-4f32-9cfa-9e41e4d3764b 814 0 2025-01-13 21:29:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6b78d879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-253 calico-apiserver-6c6b78d879-25mr6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali757e7536d21 [] []}} ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-25mr6" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.642 [INFO][4529] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-25mr6" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.695 [INFO][4541] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" HandleID="k8s-pod-network.85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.722 [INFO][4541] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" HandleID="k8s-pod-network.85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318b50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-253", "pod":"calico-apiserver-6c6b78d879-25mr6", "timestamp":"2025-01-13 21:30:20.695719596 +0000 UTC"}, Hostname:"ip-172-31-18-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.722 [INFO][4541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.722 [INFO][4541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.722 [INFO][4541] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-253' Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.727 [INFO][4541] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" host="ip-172-31-18-253" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.746 [INFO][4541] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-253" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.759 [INFO][4541] ipam/ipam.go 489: Trying affinity for 192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.762 [INFO][4541] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.770 [INFO][4541] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.771 [INFO][4541] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" host="ip-172-31-18-253" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.776 [INFO][4541] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503 Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.789 [INFO][4541] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" host="ip-172-31-18-253" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.805 [INFO][4541] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.65/26] block=192.168.74.64/26 handle="k8s-pod-network.85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" host="ip-172-31-18-253" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.806 [INFO][4541] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.65/26] handle="k8s-pod-network.85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" host="ip-172-31-18-253" Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.806 [INFO][4541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:30:20.953825 containerd[1971]: 2025-01-13 21:30:20.806 [INFO][4541] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.65/26] IPv6=[] ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" HandleID="k8s-pod-network.85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:20.980889 containerd[1971]: 2025-01-13 21:30:20.818 [INFO][4529] cni-plugin/k8s.go 386: Populated endpoint ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-25mr6" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0", GenerateName:"calico-apiserver-6c6b78d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd1f0189-02c3-4f32-9cfa-9e41e4d3764b", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b78d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"", Pod:"calico-apiserver-6c6b78d879-25mr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali757e7536d21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:20.980889 containerd[1971]: 2025-01-13 21:30:20.820 [INFO][4529] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.65/32] ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-25mr6" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:20.980889 containerd[1971]: 2025-01-13 21:30:20.820 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali757e7536d21 ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-25mr6" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:20.980889 containerd[1971]: 2025-01-13 21:30:20.854 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-25mr6" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:20.980889 containerd[1971]: 2025-01-13 21:30:20.860 [INFO][4529] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-25mr6" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0", GenerateName:"calico-apiserver-6c6b78d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd1f0189-02c3-4f32-9cfa-9e41e4d3764b", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b78d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503", Pod:"calico-apiserver-6c6b78d879-25mr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali757e7536d21", MAC:"6a:ce:61:10:93:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:20.980889 containerd[1971]: 2025-01-13 21:30:20.929 [INFO][4529] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-25mr6" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:21.098377 containerd[1971]: time="2025-01-13T21:30:21.097082758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:21.098531 containerd[1971]: time="2025-01-13T21:30:21.098429324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:21.098531 containerd[1971]: time="2025-01-13T21:30:21.098507840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:21.098963 containerd[1971]: time="2025-01-13T21:30:21.098891356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:21.243918 systemd[1]: Started cri-containerd-85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503.scope - libcontainer container 85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503. 
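The IPAM trace above claims 192.168.74.65 from the host-affine block 192.168.74.64/26 while holding the host-wide lock. A toy first-fit allocator over that block; seeding it with the block's first address marked used reproduces the assignment seen in the log (this illustrates block-based IPAM in general, not Calico's actual allocator):

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree scans the block in address order and returns the first address
// not already in the used set.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.74.64/26") // the affine block from the trace
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.74.64"): true, // assumed already taken on this node
	}
	if a, ok := firstFree(block, used); ok {
		fmt.Println(a) // 192.168.74.65, matching "Successfully claimed IPs"
	}
}
```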
Jan 13 21:30:21.530800 containerd[1971]: time="2025-01-13T21:30:21.530363262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b78d879-25mr6,Uid:cd1f0189-02c3-4f32-9cfa-9e41e4d3764b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503\"" Jan 13 21:30:21.546387 containerd[1971]: time="2025-01-13T21:30:21.545019278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:30:21.807302 containerd[1971]: time="2025-01-13T21:30:21.807096542Z" level=info msg="StopPodSandbox for \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\"" Jan 13 21:30:21.815125 containerd[1971]: time="2025-01-13T21:30:21.807616177Z" level=info msg="StopPodSandbox for \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\"" Jan 13 21:30:21.822888 containerd[1971]: time="2025-01-13T21:30:21.807690993Z" level=info msg="StopPodSandbox for \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\"" Jan 13 21:30:22.013316 systemd-networkd[1894]: cali757e7536d21: Gained IPv6LL Jan 13 21:30:22.067804 kernel: bpftool[4767]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.054 [INFO][4741] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.054 [INFO][4741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" iface="eth0" netns="/var/run/netns/cni-a2dba3c4-2f39-4331-19df-99ff67398f94" Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.054 [INFO][4741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" iface="eth0" netns="/var/run/netns/cni-a2dba3c4-2f39-4331-19df-99ff67398f94" Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.055 [INFO][4741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" iface="eth0" netns="/var/run/netns/cni-a2dba3c4-2f39-4331-19df-99ff67398f94" Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.055 [INFO][4741] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.055 [INFO][4741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.242 [INFO][4772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" HandleID="k8s-pod-network.929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.247 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.247 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.275 [WARNING][4772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" HandleID="k8s-pod-network.929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.275 [INFO][4772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" HandleID="k8s-pod-network.929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.285 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:22.334654 containerd[1971]: 2025-01-13 21:30:22.317 [INFO][4741] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:22.341613 containerd[1971]: time="2025-01-13T21:30:22.337613542Z" level=info msg="TearDown network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\" successfully" Jan 13 21:30:22.341613 containerd[1971]: time="2025-01-13T21:30:22.337679452Z" level=info msg="StopPodSandbox for \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\" returns successfully" Jan 13 21:30:22.347364 containerd[1971]: time="2025-01-13T21:30:22.343224273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b78d879-nt4jz,Uid:66928d5a-cb4e-4c35-8a71-cae23340ac99,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:30:22.344771 systemd[1]: run-netns-cni\x2da2dba3c4\x2d2f39\x2d4331\x2d19df\x2d99ff67398f94.mount: Deactivated successfully. Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.092 [INFO][4756] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.092 [INFO][4756] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" iface="eth0" netns="/var/run/netns/cni-4d8d3679-4567-4ad5-04ca-487fe7ddd595" Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.093 [INFO][4756] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" iface="eth0" netns="/var/run/netns/cni-4d8d3679-4567-4ad5-04ca-487fe7ddd595" Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.093 [INFO][4756] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" iface="eth0" netns="/var/run/netns/cni-4d8d3679-4567-4ad5-04ca-487fe7ddd595" Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.093 [INFO][4756] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.093 [INFO][4756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.340 [INFO][4779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" HandleID="k8s-pod-network.75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.348 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.348 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.368 [WARNING][4779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" HandleID="k8s-pod-network.75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.368 [INFO][4779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" HandleID="k8s-pod-network.75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.373 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:22.385830 containerd[1971]: 2025-01-13 21:30:22.376 [INFO][4756] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:22.385830 containerd[1971]: time="2025-01-13T21:30:22.385705785Z" level=info msg="TearDown network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\" successfully" Jan 13 21:30:22.385830 containerd[1971]: time="2025-01-13T21:30:22.385755782Z" level=info msg="StopPodSandbox for \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\" returns successfully" Jan 13 21:30:22.388707 containerd[1971]: time="2025-01-13T21:30:22.386977107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-trkmh,Uid:85e576e6-d66c-4263-a4ec-9e1bd46d45d0,Namespace:kube-system,Attempt:1,}" Jan 13 21:30:22.402224 systemd[1]: run-netns-cni\x2d4d8d3679\x2d4567\x2d4ad5\x2d04ca\x2d487fe7ddd595.mount: Deactivated successfully. Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.086 [INFO][4747] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.087 [INFO][4747] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" iface="eth0" netns="/var/run/netns/cni-06388056-7acf-9f5a-0dfc-513d25ab052e" Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.088 [INFO][4747] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" iface="eth0" netns="/var/run/netns/cni-06388056-7acf-9f5a-0dfc-513d25ab052e" Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.089 [INFO][4747] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" iface="eth0" netns="/var/run/netns/cni-06388056-7acf-9f5a-0dfc-513d25ab052e" Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.089 [INFO][4747] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.089 [INFO][4747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.364 [INFO][4778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" HandleID="k8s-pod-network.dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.365 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.374 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.407 [WARNING][4778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" HandleID="k8s-pod-network.dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.407 [INFO][4778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" HandleID="k8s-pod-network.dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.415 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:22.444734 containerd[1971]: 2025-01-13 21:30:22.419 [INFO][4747] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:22.452821 containerd[1971]: time="2025-01-13T21:30:22.452776006Z" level=info msg="TearDown network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\" successfully" Jan 13 21:30:22.453012 containerd[1971]: time="2025-01-13T21:30:22.452991515Z" level=info msg="StopPodSandbox for \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\" returns successfully" Jan 13 21:30:22.454633 systemd[1]: run-netns-cni\x2d06388056\x2d7acf\x2d9f5a\x2d0dfc\x2d513d25ab052e.mount: Deactivated successfully. Jan 13 21:30:22.457441 containerd[1971]: time="2025-01-13T21:30:22.454976707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cf758d46-dk7vw,Uid:a863c6f7-9d33-4cc7-acb9-6720fe35112d,Namespace:calico-system,Attempt:1,}" Jan 13 21:30:22.975301 (udev-worker)[4433]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:30:22.981185 systemd-networkd[1894]: caliaabcfc63263: Link UP Jan 13 21:30:22.986193 systemd-networkd[1894]: caliaabcfc63263: Gained carrier Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.648 [INFO][4802] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0 calico-apiserver-6c6b78d879- calico-apiserver 66928d5a-cb4e-4c35-8a71-cae23340ac99 827 0 2025-01-13 21:29:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6b78d879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-253 calico-apiserver-6c6b78d879-nt4jz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaabcfc63263 [] []}} ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-nt4jz" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.653 [INFO][4802] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-nt4jz" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.833 [INFO][4831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" HandleID="k8s-pod-network.212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.854 [INFO][4831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" HandleID="k8s-pod-network.212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a9000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-253", "pod":"calico-apiserver-6c6b78d879-nt4jz", "timestamp":"2025-01-13 21:30:22.83358639 
+0000 UTC"}, Hostname:"ip-172-31-18-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.854 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.854 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.854 [INFO][4831] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-253' Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.862 [INFO][4831] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" host="ip-172-31-18-253" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.874 [INFO][4831] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-253" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.890 [INFO][4831] ipam/ipam.go 489: Trying affinity for 192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.896 [INFO][4831] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.905 [INFO][4831] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.905 [INFO][4831] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" host="ip-172-31-18-253" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.910 [INFO][4831] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.926 [INFO][4831] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" host="ip-172-31-18-253" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.942 [INFO][4831] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.66/26] block=192.168.74.64/26 handle="k8s-pod-network.212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" host="ip-172-31-18-253" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.942 [INFO][4831] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.66/26] handle="k8s-pod-network.212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" host="ip-172-31-18-253" Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.942 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:30:23.054242 containerd[1971]: 2025-01-13 21:30:22.942 [INFO][4831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.66/26] IPv6=[] ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" HandleID="k8s-pod-network.212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:23.055436 containerd[1971]: 2025-01-13 21:30:22.950 [INFO][4802] cni-plugin/k8s.go 386: Populated endpoint ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-nt4jz" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0", GenerateName:"calico-apiserver-6c6b78d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"66928d5a-cb4e-4c35-8a71-cae23340ac99", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b78d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"", Pod:"calico-apiserver-6c6b78d879-nt4jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaabcfc63263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:23.055436 containerd[1971]: 2025-01-13 21:30:22.950 [INFO][4802] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.66/32] ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-nt4jz" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:23.055436 containerd[1971]: 2025-01-13 21:30:22.950 [INFO][4802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaabcfc63263 ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-nt4jz" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:23.055436 containerd[1971]: 2025-01-13 21:30:22.986 [INFO][4802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-nt4jz" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:23.055436 containerd[1971]: 2025-01-13 21:30:22.993 [INFO][4802] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-nt4jz" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0", GenerateName:"calico-apiserver-6c6b78d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"66928d5a-cb4e-4c35-8a71-cae23340ac99", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b78d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c", Pod:"calico-apiserver-6c6b78d879-nt4jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaabcfc63263", MAC:"22:26:d7:4d:66:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:23.055436 containerd[1971]: 2025-01-13 21:30:23.046 [INFO][4802] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b78d879-nt4jz" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:23.151719 systemd-networkd[1894]: cali27d41bf8a2e: Link UP Jan 13 21:30:23.165090 systemd-networkd[1894]: cali27d41bf8a2e: Gained carrier Jan 13 21:30:23.184843 containerd[1971]: time="2025-01-13T21:30:23.178802603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:23.184843 containerd[1971]: time="2025-01-13T21:30:23.179289886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:23.184843 containerd[1971]: time="2025-01-13T21:30:23.182463934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:23.184843 containerd[1971]: time="2025-01-13T21:30:23.182608014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:23.265914 systemd[1]: Started cri-containerd-212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c.scope - libcontainer container 212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c. 
Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:22.740 [INFO][4806] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0 calico-kube-controllers-64cf758d46- calico-system a863c6f7-9d33-4cc7-acb9-6720fe35112d 828 0 2025-01-13 21:29:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64cf758d46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-253 calico-kube-controllers-64cf758d46-dk7vw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali27d41bf8a2e [] []}} ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Namespace="calico-system" Pod="calico-kube-controllers-64cf758d46-dk7vw" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:22.742 [INFO][4806] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Namespace="calico-system" Pod="calico-kube-controllers-64cf758d46-dk7vw" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:22.874 [INFO][4840] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:22.903 [INFO][4840] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d4080), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-253", "pod":"calico-kube-controllers-64cf758d46-dk7vw", "timestamp":"2025-01-13 21:30:22.87462276 +0000 UTC"}, Hostname:"ip-172-31-18-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:22.903 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:22.942 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:22.942 [INFO][4840] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-253' Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:22.971 [INFO][4840] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" host="ip-172-31-18-253" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.029 [INFO][4840] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-253" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.048 [INFO][4840] ipam/ipam.go 489: Trying affinity for 192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.056 [INFO][4840] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.064 [INFO][4840] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.065 [INFO][4840] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" host="ip-172-31-18-253" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.068 [INFO][4840] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8 Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.079 [INFO][4840] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" host="ip-172-31-18-253" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.096 [INFO][4840] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.67/26] block=192.168.74.64/26 handle="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" host="ip-172-31-18-253" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.096 [INFO][4840] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.67/26] handle="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" host="ip-172-31-18-253" Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.096 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
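
Note how the host-wide IPAM lock serializes the three concurrent CNI ADDs: request [4840] logged "About to acquire" at 21:30:22.903 but only acquired the lock at 22.942, the instant [4831] released it, and was then handed the next free address (.67, after [4831] took .66); [4844] in turn waited from 22.974 until [4840]'s release at 23.096. A toy reproduction of that mutual exclusion (hypothetical, not Calico code; which goroutine wins each acquisition is up to the scheduler, the invariant being that addresses are handed out one at a time):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu   sync.Mutex // stands in for the host-wide IPAM lock
            next = 66       // next free host octet in 192.168.74.64/26
            wg   sync.WaitGroup
        )
        assign := func(req string) {
            defer wg.Done()
            mu.Lock() // later requests block here, as [4840] and [4844] did
            ip := fmt.Sprintf("192.168.74.%d", next)
            next++
            mu.Unlock()
            fmt.Printf("%s got %s\n", req, ip)
        }
        for _, req := range []string{"[4831]", "[4840]", "[4844]"} {
            wg.Add(1)
            go assign(req)
        }
        wg.Wait()
    }
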
Jan 13 21:30:23.294595 containerd[1971]: 2025-01-13 21:30:23.096 [INFO][4840] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.67/26] IPv6=[] ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:23.300728 containerd[1971]: 2025-01-13 21:30:23.100 [INFO][4806] cni-plugin/k8s.go 386: Populated endpoint ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Namespace="calico-system" Pod="calico-kube-controllers-64cf758d46-dk7vw" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0", GenerateName:"calico-kube-controllers-64cf758d46-", Namespace:"calico-system", SelfLink:"", UID:"a863c6f7-9d33-4cc7-acb9-6720fe35112d", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cf758d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"", Pod:"calico-kube-controllers-64cf758d46-dk7vw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27d41bf8a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:23.300728 containerd[1971]: 2025-01-13 21:30:23.104 [INFO][4806] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.67/32] ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Namespace="calico-system" Pod="calico-kube-controllers-64cf758d46-dk7vw" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:23.300728 containerd[1971]: 2025-01-13 21:30:23.104 [INFO][4806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27d41bf8a2e ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Namespace="calico-system" Pod="calico-kube-controllers-64cf758d46-dk7vw" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:23.300728 containerd[1971]: 2025-01-13 21:30:23.196 [INFO][4806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Namespace="calico-system" Pod="calico-kube-controllers-64cf758d46-dk7vw" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:23.300728 containerd[1971]: 2025-01-13 21:30:23.201 [INFO][4806] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Namespace="calico-system" Pod="calico-kube-controllers-64cf758d46-dk7vw" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0", GenerateName:"calico-kube-controllers-64cf758d46-", Namespace:"calico-system", SelfLink:"", UID:"a863c6f7-9d33-4cc7-acb9-6720fe35112d", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cf758d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8", Pod:"calico-kube-controllers-64cf758d46-dk7vw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27d41bf8a2e", MAC:"ce:d4:bc:81:67:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:23.300728 containerd[1971]: 2025-01-13 21:30:23.274 [INFO][4806] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Namespace="calico-system" Pod="calico-kube-controllers-64cf758d46-dk7vw" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:23.299078 systemd-networkd[1894]: vxlan.calico: Link UP Jan 13 21:30:23.299085 systemd-networkd[1894]: vxlan.calico: Gained carrier Jan 13 21:30:23.387350 systemd-networkd[1894]: cali5cf29fa5ec8: Link UP Jan 13 21:30:23.391271 systemd-networkd[1894]: cali5cf29fa5ec8: Gained carrier Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:22.775 [INFO][4819] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0 coredns-6f6b679f8f- kube-system 85e576e6-d66c-4263-a4ec-9e1bd46d45d0 829 0 2025-01-13 21:29:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-253 coredns-6f6b679f8f-trkmh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5cf29fa5ec8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Namespace="kube-system" Pod="coredns-6f6b679f8f-trkmh" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-" Jan 13 21:30:23.446839 
containerd[1971]: 2025-01-13 21:30:22.776 [INFO][4819] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Namespace="kube-system" Pod="coredns-6f6b679f8f-trkmh" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:22.929 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" HandleID="k8s-pod-network.86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:22.973 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" HandleID="k8s-pod-network.86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002116e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-253", "pod":"coredns-6f6b679f8f-trkmh", "timestamp":"2025-01-13 21:30:22.929918484 +0000 UTC"}, Hostname:"ip-172-31-18-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:22.974 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.096 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.096 [INFO][4844] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-253' Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.102 [INFO][4844] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" host="ip-172-31-18-253" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.117 [INFO][4844] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-253" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.159 [INFO][4844] ipam/ipam.go 489: Trying affinity for 192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.185 [INFO][4844] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.209 [INFO][4844] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.211 [INFO][4844] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" host="ip-172-31-18-253" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.237 [INFO][4844] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681 Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.286 [INFO][4844] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" host="ip-172-31-18-253" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.310 [INFO][4844] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.68/26] block=192.168.74.64/26 handle="k8s-pod-network.86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" host="ip-172-31-18-253" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.311 [INFO][4844] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.68/26] handle="k8s-pod-network.86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" host="ip-172-31-18-253" Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.312 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
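
In the WorkloadEndpoint struct dumps around this point, port numbers are printed in hexadecimal: Port:0x35 is 53 (the "dns" and "dns-tcp" ports CoreDNS serves) and Port:0x23c1 is 9153 (its Prometheus metrics port). A one-line check of the conversion:

    package main

    import "fmt"

    func main() {
        fmt.Println(0x35, 0x23c1) // 53 9153
    }
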
Jan 13 21:30:23.446839 containerd[1971]: 2025-01-13 21:30:23.312 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.68/26] IPv6=[] ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" HandleID="k8s-pod-network.86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:23.450495 containerd[1971]: 2025-01-13 21:30:23.329 [INFO][4819] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Namespace="kube-system" Pod="coredns-6f6b679f8f-trkmh" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"85e576e6-d66c-4263-a4ec-9e1bd46d45d0", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"", Pod:"coredns-6f6b679f8f-trkmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cf29fa5ec8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:23.450495 containerd[1971]: 2025-01-13 21:30:23.331 [INFO][4819] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.68/32] ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Namespace="kube-system" Pod="coredns-6f6b679f8f-trkmh" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:23.450495 containerd[1971]: 2025-01-13 21:30:23.332 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cf29fa5ec8 ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Namespace="kube-system" Pod="coredns-6f6b679f8f-trkmh" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:23.450495 containerd[1971]: 2025-01-13 21:30:23.394 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Namespace="kube-system" Pod="coredns-6f6b679f8f-trkmh" 
WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:23.450495 containerd[1971]: 2025-01-13 21:30:23.397 [INFO][4819] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Namespace="kube-system" Pod="coredns-6f6b679f8f-trkmh" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"85e576e6-d66c-4263-a4ec-9e1bd46d45d0", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681", Pod:"coredns-6f6b679f8f-trkmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cf29fa5ec8", MAC:"9e:c3:32:ad:b4:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:23.450495 containerd[1971]: 2025-01-13 21:30:23.433 [INFO][4819] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681" Namespace="kube-system" Pod="coredns-6f6b679f8f-trkmh" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:23.499962 containerd[1971]: time="2025-01-13T21:30:23.497202010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:23.499962 containerd[1971]: time="2025-01-13T21:30:23.497420433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:23.499962 containerd[1971]: time="2025-01-13T21:30:23.497445651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:23.499962 containerd[1971]: time="2025-01-13T21:30:23.497568312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:23.630984 systemd[1]: Started cri-containerd-998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8.scope - libcontainer container 998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8. Jan 13 21:30:23.700789 containerd[1971]: time="2025-01-13T21:30:23.683012103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:23.700789 containerd[1971]: time="2025-01-13T21:30:23.683093214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:23.700789 containerd[1971]: time="2025-01-13T21:30:23.683142980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:23.700789 containerd[1971]: time="2025-01-13T21:30:23.683293258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:23.723663 containerd[1971]: time="2025-01-13T21:30:23.721490644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b78d879-nt4jz,Uid:66928d5a-cb4e-4c35-8a71-cae23340ac99,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c\"" Jan 13 21:30:23.778288 systemd[1]: Started cri-containerd-86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681.scope - libcontainer container 86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681. Jan 13 21:30:23.844718 containerd[1971]: time="2025-01-13T21:30:23.844400877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cf758d46-dk7vw,Uid:a863c6f7-9d33-4cc7-acb9-6720fe35112d,Namespace:calico-system,Attempt:1,} returns sandbox id \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\"" Jan 13 21:30:23.957225 containerd[1971]: time="2025-01-13T21:30:23.956764329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-trkmh,Uid:85e576e6-d66c-4263-a4ec-9e1bd46d45d0,Namespace:kube-system,Attempt:1,} returns sandbox id \"86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681\"" Jan 13 21:30:23.971442 containerd[1971]: time="2025-01-13T21:30:23.971002765Z" level=info msg="CreateContainer within sandbox \"86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:30:24.018024 containerd[1971]: time="2025-01-13T21:30:24.017964511Z" level=info msg="CreateContainer within sandbox \"86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59128185418af00fdb2c0b85452562827a3795a4fdc8d5839da2127f50bce143\"" Jan 13 21:30:24.020369 containerd[1971]: time="2025-01-13T21:30:24.019653456Z" level=info msg="StartContainer for \"59128185418af00fdb2c0b85452562827a3795a4fdc8d5839da2127f50bce143\"" Jan 13 21:30:24.100919 systemd[1]: Started cri-containerd-59128185418af00fdb2c0b85452562827a3795a4fdc8d5839da2127f50bce143.scope - libcontainer container 59128185418af00fdb2c0b85452562827a3795a4fdc8d5839da2127f50bce143. 
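
The kubelet line just below reports podStartSLOduration=39.362537552 for coredns-6f6b679f8f-trkmh, which matches watchObservedRunningTime minus podCreationTimestamp exactly; both pull timestamps are the zero time (0001-01-01), consistent with no image pull counting against the startup SLO, and the "m=+43.83…" suffixes are Go's monotonic clock readings, not part of the wall time. The subtraction, checked in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        created, _ := time.Parse(layout, "2025-01-13 21:29:45 +0000 UTC")
        // time.Parse accepts a fractional second in the input even when the
        // layout omits it.
        running, _ := time.Parse(layout, "2025-01-13 21:30:24.362537552 +0000 UTC")
        fmt.Println(running.Sub(created)) // 39.362537552s
    }
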
Jan 13 21:30:24.196723 containerd[1971]: time="2025-01-13T21:30:24.194370472Z" level=info msg="StartContainer for \"59128185418af00fdb2c0b85452562827a3795a4fdc8d5839da2127f50bce143\" returns successfully" Jan 13 21:30:24.363586 kubelet[3350]: I0113 21:30:24.362559 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-trkmh" podStartSLOduration=39.362537552 podStartE2EDuration="39.362537552s" podCreationTimestamp="2025-01-13 21:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:30:24.362150618 +0000 UTC m=+43.831828090" watchObservedRunningTime="2025-01-13 21:30:24.362537552 +0000 UTC m=+43.832215023" Jan 13 21:30:24.371944 systemd-networkd[1894]: vxlan.calico: Gained IPv6LL Jan 13 21:30:24.372326 systemd-networkd[1894]: caliaabcfc63263: Gained IPv6LL Jan 13 21:30:24.810786 containerd[1971]: time="2025-01-13T21:30:24.809630416Z" level=info msg="StopPodSandbox for \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\"" Jan 13 21:30:24.815082 containerd[1971]: time="2025-01-13T21:30:24.814130239Z" level=info msg="StopPodSandbox for \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\"" Jan 13 21:30:25.204511 systemd-networkd[1894]: cali27d41bf8a2e: Gained IPv6LL Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.100 [INFO][5110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.108 [INFO][5110] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" iface="eth0" netns="/var/run/netns/cni-9041616d-ca0d-6610-fbc7-d990a2de3c9a" Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.110 [INFO][5110] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" iface="eth0" netns="/var/run/netns/cni-9041616d-ca0d-6610-fbc7-d990a2de3c9a" Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.111 [INFO][5110] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" iface="eth0" netns="/var/run/netns/cni-9041616d-ca0d-6610-fbc7-d990a2de3c9a" Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.111 [INFO][5110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.111 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.217 [INFO][5127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" HandleID="k8s-pod-network.3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.217 [INFO][5127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.218 [INFO][5127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.241 [WARNING][5127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" HandleID="k8s-pod-network.3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.241 [INFO][5127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" HandleID="k8s-pod-network.3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.253 [INFO][5127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:25.273508 containerd[1971]: 2025-01-13 21:30:25.256 [INFO][5110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:25.276874 containerd[1971]: time="2025-01-13T21:30:25.276827858Z" level=info msg="TearDown network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\" successfully" Jan 13 21:30:25.276874 containerd[1971]: time="2025-01-13T21:30:25.276873550Z" level=info msg="StopPodSandbox for \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\" returns successfully" Jan 13 21:30:25.281762 systemd[1]: run-netns-cni\x2d9041616d\x2dca0d\x2d6610\x2dfbc7\x2dd990a2de3c9a.mount: Deactivated successfully. Jan 13 21:30:25.284614 containerd[1971]: time="2025-01-13T21:30:25.284542870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r85h4,Uid:6df22146-8f07-4f5d-bc45-b3dcbe228775,Namespace:kube-system,Attempt:1,}" Jan 13 21:30:25.332563 systemd-networkd[1894]: cali5cf29fa5ec8: Gained IPv6LL Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.128 [INFO][5109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.129 [INFO][5109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" iface="eth0" netns="/var/run/netns/cni-8da09fcd-002f-ec72-c6e4-d57f67b008f7" Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.130 [INFO][5109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" iface="eth0" netns="/var/run/netns/cni-8da09fcd-002f-ec72-c6e4-d57f67b008f7" Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.130 [INFO][5109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" iface="eth0" netns="/var/run/netns/cni-8da09fcd-002f-ec72-c6e4-d57f67b008f7" Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.130 [INFO][5109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.130 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.294 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" HandleID="k8s-pod-network.971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.295 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.295 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.316 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" HandleID="k8s-pod-network.971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.316 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" HandleID="k8s-pod-network.971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.329 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:25.368622 containerd[1971]: 2025-01-13 21:30:25.357 [INFO][5109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:25.372864 containerd[1971]: time="2025-01-13T21:30:25.372042511Z" level=info msg="TearDown network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\" successfully" Jan 13 21:30:25.372864 containerd[1971]: time="2025-01-13T21:30:25.372679328Z" level=info msg="StopPodSandbox for \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\" returns successfully" Jan 13 21:30:25.376402 containerd[1971]: time="2025-01-13T21:30:25.376121827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4xcp,Uid:a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4,Namespace:calico-system,Attempt:1,}" Jan 13 21:30:25.383119 systemd[1]: run-netns-cni\x2d8da09fcd\x2d002f\x2dec72\x2dc6e4\x2dd57f67b008f7.mount: Deactivated successfully. Jan 13 21:30:25.837537 (udev-worker)[4991]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 21:30:25.844963 systemd-networkd[1894]: calife1aa6da20f: Link UP Jan 13 21:30:25.853522 systemd-networkd[1894]: calife1aa6da20f: Gained carrier Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.509 [INFO][5145] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0 coredns-6f6b679f8f- kube-system 6df22146-8f07-4f5d-bc45-b3dcbe228775 853 0 2025-01-13 21:29:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-253 coredns-6f6b679f8f-r85h4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calife1aa6da20f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Namespace="kube-system" Pod="coredns-6f6b679f8f-r85h4" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.511 [INFO][5145] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Namespace="kube-system" Pod="coredns-6f6b679f8f-r85h4" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.659 [INFO][5172] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" HandleID="k8s-pod-network.3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.683 [INFO][5172] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" HandleID="k8s-pod-network.3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000509a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-253", "pod":"coredns-6f6b679f8f-r85h4", "timestamp":"2025-01-13 21:30:25.659510214 +0000 UTC"}, Hostname:"ip-172-31-18-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.684 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.684 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.684 [INFO][5172] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-253' Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.697 [INFO][5172] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" host="ip-172-31-18-253" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.716 [INFO][5172] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-253" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.735 [INFO][5172] ipam/ipam.go 489: Trying affinity for 192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.750 [INFO][5172] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.762 [INFO][5172] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.763 [INFO][5172] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" host="ip-172-31-18-253" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.771 [INFO][5172] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73 Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.788 [INFO][5172] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" host="ip-172-31-18-253" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.818 [INFO][5172] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.69/26] block=192.168.74.64/26 handle="k8s-pod-network.3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" host="ip-172-31-18-253" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.819 [INFO][5172] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.69/26] handle="k8s-pod-network.3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" host="ip-172-31-18-253" Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.820 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:30:25.935342 containerd[1971]: 2025-01-13 21:30:25.820 [INFO][5172] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.69/26] IPv6=[] ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" HandleID="k8s-pod-network.3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:25.943505 containerd[1971]: 2025-01-13 21:30:25.827 [INFO][5145] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Namespace="kube-system" Pod="coredns-6f6b679f8f-r85h4" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6df22146-8f07-4f5d-bc45-b3dcbe228775", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"", Pod:"coredns-6f6b679f8f-r85h4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife1aa6da20f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:25.943505 containerd[1971]: 2025-01-13 21:30:25.830 [INFO][5145] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.69/32] ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Namespace="kube-system" Pod="coredns-6f6b679f8f-r85h4" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:25.943505 containerd[1971]: 2025-01-13 21:30:25.830 [INFO][5145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife1aa6da20f ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Namespace="kube-system" Pod="coredns-6f6b679f8f-r85h4" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:25.943505 containerd[1971]: 2025-01-13 21:30:25.856 [INFO][5145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Namespace="kube-system" Pod="coredns-6f6b679f8f-r85h4" 
WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:25.943505 containerd[1971]: 2025-01-13 21:30:25.866 [INFO][5145] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Namespace="kube-system" Pod="coredns-6f6b679f8f-r85h4" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6df22146-8f07-4f5d-bc45-b3dcbe228775", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73", Pod:"coredns-6f6b679f8f-r85h4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife1aa6da20f", MAC:"ea:bf:cb:50:51:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:25.943505 containerd[1971]: 2025-01-13 21:30:25.919 [INFO][5145] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73" Namespace="kube-system" Pod="coredns-6f6b679f8f-r85h4" WorkloadEndpoint="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:26.071995 containerd[1971]: time="2025-01-13T21:30:26.069926335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:26.071995 containerd[1971]: time="2025-01-13T21:30:26.070063844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:26.071995 containerd[1971]: time="2025-01-13T21:30:26.070100163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:26.071995 containerd[1971]: time="2025-01-13T21:30:26.070344168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:26.135391 systemd[1]: Started cri-containerd-3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73.scope - libcontainer container 3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73. Jan 13 21:30:26.257215 systemd-networkd[1894]: calibfb5e3ba6d5: Link UP Jan 13 21:30:26.257559 systemd-networkd[1894]: calibfb5e3ba6d5: Gained carrier Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:25.694 [INFO][5159] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0 csi-node-driver- calico-system a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4 854 0 2025-01-13 21:29:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-253 csi-node-driver-v4xcp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibfb5e3ba6d5 [] []}} ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Namespace="calico-system" Pod="csi-node-driver-v4xcp" WorkloadEndpoint="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:25.695 [INFO][5159] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Namespace="calico-system" Pod="csi-node-driver-v4xcp" WorkloadEndpoint="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:25.871 [INFO][5187] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" HandleID="k8s-pod-network.42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.010 [INFO][5187] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" HandleID="k8s-pod-network.42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003198c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-253", "pod":"csi-node-driver-v4xcp", "timestamp":"2025-01-13 21:30:25.871219501 +0000 UTC"}, Hostname:"ip-172-31-18-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.010 [INFO][5187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.011 [INFO][5187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
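systemd-networkd reports the host-side veth calibfb5e3ba6d5 coming up just as the CNI plugin logs "Setting the host side veth name". Calico derives these "cali…" names from a hash of the workload endpoint identity, truncated so the result fits in a Linux interface name; the exact input it hashes is an assumption here, but the shape is easy to sketch:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName sketches the "cali" + truncated-hash scheme. Hashing
// namespace/pod is an illustrative assumption; the real plugin hashes a
// specific endpoint key.
func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "/" + pod))
	// Interface names are limited to 15 bytes, so only a short prefix
	// of the digest survives after the "cali" prefix.
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-system", "csi-node-driver-v4xcp"))
}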
Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.011 [INFO][5187] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-253' Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.044 [INFO][5187] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" host="ip-172-31-18-253" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.101 [INFO][5187] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-253" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.158 [INFO][5187] ipam/ipam.go 489: Trying affinity for 192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.170 [INFO][5187] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.193 [INFO][5187] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.193 [INFO][5187] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" host="ip-172-31-18-253" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.206 [INFO][5187] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32 Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.217 [INFO][5187] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" host="ip-172-31-18-253" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.239 [INFO][5187] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.70/26] block=192.168.74.64/26 handle="k8s-pod-network.42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" host="ip-172-31-18-253" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.239 [INFO][5187] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.70/26] handle="k8s-pod-network.42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" host="ip-172-31-18-253" Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.239 [INFO][5187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
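Both CNI invocations above print the same "About to acquire / Acquired / Released host-wide IPAM lock" sequence: the csi-node-driver request only proceeds once the coredns assignment has released the lock, which is why the two pods receive consecutive addresses (.69 then .70) from the same block. A toy rendering of that discipline with a process-local mutex; the real lock coordinates separate plugin invocations on the host, not goroutines, so take this purely as a sketch:

package main

import (
	"fmt"
	"sync"
)

var ipamLock sync.Mutex // stand-in for Calico's host-wide IPAM lock

func autoAssign(pod string, next *int) string {
	fmt.Println("About to acquire host-wide IPAM lock:", pod)
	ipamLock.Lock()
	defer func() {
		ipamLock.Unlock()
		fmt.Println("Released host-wide IPAM lock:", pod)
	}()
	fmt.Println("Acquired host-wide IPAM lock:", pod)
	ip := fmt.Sprintf("192.168.74.%d/26", *next) // *next only touched under the lock
	*next++
	return ip
}

func main() {
	next := 69
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-6f6b679f8f-r85h4", "csi-node-driver-v4xcp"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(p, "->", autoAssign(p, &next))
		}(pod)
	}
	wg.Wait()
}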
Jan 13 21:30:26.306226 containerd[1971]: 2025-01-13 21:30:26.240 [INFO][5187] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.70/26] IPv6=[] ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" HandleID="k8s-pod-network.42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:26.309188 containerd[1971]: 2025-01-13 21:30:26.244 [INFO][5159] cni-plugin/k8s.go 386: Populated endpoint ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Namespace="calico-system" Pod="csi-node-driver-v4xcp" WorkloadEndpoint="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"", Pod:"csi-node-driver-v4xcp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfb5e3ba6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:26.309188 containerd[1971]: 2025-01-13 21:30:26.244 [INFO][5159] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.70/32] ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Namespace="calico-system" Pod="csi-node-driver-v4xcp" WorkloadEndpoint="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:26.309188 containerd[1971]: 2025-01-13 21:30:26.244 [INFO][5159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfb5e3ba6d5 ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Namespace="calico-system" Pod="csi-node-driver-v4xcp" WorkloadEndpoint="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:26.309188 containerd[1971]: 2025-01-13 21:30:26.255 [INFO][5159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Namespace="calico-system" Pod="csi-node-driver-v4xcp" WorkloadEndpoint="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:26.309188 containerd[1971]: 2025-01-13 21:30:26.255 [INFO][5159] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Namespace="calico-system" 
Pod="csi-node-driver-v4xcp" WorkloadEndpoint="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32", Pod:"csi-node-driver-v4xcp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfb5e3ba6d5", MAC:"9a:b5:80:41:b8:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:26.309188 containerd[1971]: 2025-01-13 21:30:26.293 [INFO][5159] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32" Namespace="calico-system" Pod="csi-node-driver-v4xcp" WorkloadEndpoint="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:26.375511 containerd[1971]: time="2025-01-13T21:30:26.375466235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r85h4,Uid:6df22146-8f07-4f5d-bc45-b3dcbe228775,Namespace:kube-system,Attempt:1,} returns sandbox id \"3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73\"" Jan 13 21:30:26.384730 containerd[1971]: time="2025-01-13T21:30:26.384686526Z" level=info msg="CreateContainer within sandbox \"3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:30:26.392056 containerd[1971]: time="2025-01-13T21:30:26.391848311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:26.396877 containerd[1971]: time="2025-01-13T21:30:26.396758436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:26.396877 containerd[1971]: time="2025-01-13T21:30:26.396793159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:26.400005 containerd[1971]: time="2025-01-13T21:30:26.399778538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:26.456732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891169748.mount: Deactivated successfully. 
Jan 13 21:30:26.474733 containerd[1971]: time="2025-01-13T21:30:26.473771951Z" level=info msg="CreateContainer within sandbox \"3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7fd1bad529d0b155969dd7d7d16deb978d9e5f9d51ae4e95e47bbdbafb034274\"" Jan 13 21:30:26.477271 containerd[1971]: time="2025-01-13T21:30:26.477229366Z" level=info msg="StartContainer for \"7fd1bad529d0b155969dd7d7d16deb978d9e5f9d51ae4e95e47bbdbafb034274\"" Jan 13 21:30:26.492953 systemd[1]: Started cri-containerd-42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32.scope - libcontainer container 42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32. Jan 13 21:30:26.617903 systemd[1]: Started cri-containerd-7fd1bad529d0b155969dd7d7d16deb978d9e5f9d51ae4e95e47bbdbafb034274.scope - libcontainer container 7fd1bad529d0b155969dd7d7d16deb978d9e5f9d51ae4e95e47bbdbafb034274. Jan 13 21:30:26.697035 containerd[1971]: time="2025-01-13T21:30:26.696626697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4xcp,Uid:a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4,Namespace:calico-system,Attempt:1,} returns sandbox id \"42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32\"" Jan 13 21:30:26.789237 containerd[1971]: time="2025-01-13T21:30:26.789191913Z" level=info msg="StartContainer for \"7fd1bad529d0b155969dd7d7d16deb978d9e5f9d51ae4e95e47bbdbafb034274\" returns successfully" Jan 13 21:30:27.124491 systemd-networkd[1894]: calife1aa6da20f: Gained IPv6LL Jan 13 21:30:27.473174 systemd[1]: Started sshd@9-172.31.18.253:22-147.75.109.163:41304.service - OpenSSH per-connection server daemon (147.75.109.163:41304). Jan 13 21:30:27.771206 sshd[5343]: Accepted publickey for core from 147.75.109.163 port 41304 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:30:27.776370 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:27.794975 systemd-logind[1953]: New session 10 of user core. Jan 13 21:30:27.800148 systemd[1]: Started session-10.scope - Session 10 of User core. 
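The sshd entries above identify the incoming key as "RSA SHA256:nsHiw8PV…". That fingerprint format is simply the SHA-256 of the wire-format public key blob, base64-encoded without padding. A self-contained sketch using a placeholder blob; a real one would be the decoded base64 body of an authorized_keys entry:

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint renders an OpenSSH-style SHA256 fingerprint for a
// wire-format public key blob.
func fingerprint(keyBlob []byte) string {
	sum := sha256.Sum256(keyBlob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

func main() {
	// Placeholder bytes, for illustration only.
	fmt.Println(fingerprint([]byte("ssh-rsa example key blob")))
}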
Jan 13 21:30:27.874262 containerd[1971]: time="2025-01-13T21:30:27.874198753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:27.876123 containerd[1971]: time="2025-01-13T21:30:27.876016044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 21:30:27.878162 containerd[1971]: time="2025-01-13T21:30:27.878057746Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:27.881492 containerd[1971]: time="2025-01-13T21:30:27.881432758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:27.882537 containerd[1971]: time="2025-01-13T21:30:27.882493702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 6.337398362s" Jan 13 21:30:27.882537 containerd[1971]: time="2025-01-13T21:30:27.882533180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:30:27.886700 containerd[1971]: time="2025-01-13T21:30:27.886150149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:30:27.887118 containerd[1971]: time="2025-01-13T21:30:27.887057473Z" level=info msg="CreateContainer within sandbox \"85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:30:27.932699 containerd[1971]: time="2025-01-13T21:30:27.932176948Z" level=info msg="CreateContainer within sandbox \"85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e82c58790cbf65f4e3a1e47e160933ca98474310efec48305bcd005d23305f10\"" Jan 13 21:30:27.935229 containerd[1971]: time="2025-01-13T21:30:27.935174893Z" level=info msg="StartContainer for \"e82c58790cbf65f4e3a1e47e160933ca98474310efec48305bcd005d23305f10\"" Jan 13 21:30:28.024896 systemd[1]: Started cri-containerd-e82c58790cbf65f4e3a1e47e160933ca98474310efec48305bcd005d23305f10.scope - libcontainer container e82c58790cbf65f4e3a1e47e160933ca98474310efec48305bcd005d23305f10. 
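The pull report above gives both the payload size and the wall time ("size 43494504 in 6.337398362s"), so average throughput falls out directly, e.g.:

package main

import (
	"fmt"
	"time"
)

func main() {
	const pulled = 43494504 // bytes, as reported for calico/apiserver:v3.29.1
	d, _ := time.ParseDuration("6.337398362s")
	mib := float64(pulled) / (1 << 20)
	fmt.Printf("%.1f MiB in %s => %.1f MiB/s\n", mib, d, mib/d.Seconds())
}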
Jan 13 21:30:28.102440 containerd[1971]: time="2025-01-13T21:30:28.101790125Z" level=info msg="StartContainer for \"e82c58790cbf65f4e3a1e47e160933ca98474310efec48305bcd005d23305f10\" returns successfully" Jan 13 21:30:28.147943 systemd-networkd[1894]: calibfb5e3ba6d5: Gained IPv6LL Jan 13 21:30:28.432015 kubelet[3350]: I0113 21:30:28.431949 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-r85h4" podStartSLOduration=43.43192297 podStartE2EDuration="43.43192297s" podCreationTimestamp="2025-01-13 21:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:30:27.567818476 +0000 UTC m=+47.037495970" watchObservedRunningTime="2025-01-13 21:30:28.43192297 +0000 UTC m=+47.901600443" Jan 13 21:30:28.435557 kubelet[3350]: I0113 21:30:28.432084 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c6b78d879-25mr6" podStartSLOduration=29.089977864 podStartE2EDuration="35.432076384s" podCreationTimestamp="2025-01-13 21:29:53 +0000 UTC" firstStartedPulling="2025-01-13 21:30:21.542133835 +0000 UTC m=+41.011811287" lastFinishedPulling="2025-01-13 21:30:27.884232344 +0000 UTC m=+47.353909807" observedRunningTime="2025-01-13 21:30:28.430747264 +0000 UTC m=+47.900424735" watchObservedRunningTime="2025-01-13 21:30:28.432076384 +0000 UTC m=+47.901753860" Jan 13 21:30:28.473395 containerd[1971]: time="2025-01-13T21:30:28.472858056Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:28.478043 containerd[1971]: time="2025-01-13T21:30:28.477770807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 21:30:28.488212 containerd[1971]: time="2025-01-13T21:30:28.487842881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 601.651488ms" Jan 13 21:30:28.490235 containerd[1971]: time="2025-01-13T21:30:28.488418931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:30:28.491397 containerd[1971]: time="2025-01-13T21:30:28.491364549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:30:28.494016 containerd[1971]: time="2025-01-13T21:30:28.493975620Z" level=info msg="CreateContainer within sandbox \"212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:30:28.543805 containerd[1971]: time="2025-01-13T21:30:28.543282859Z" level=info msg="CreateContainer within sandbox \"212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eda463c2cb572e7f031dd46da2041b658142680ca0644711653e3cd92c27bb35\"" Jan 13 21:30:28.548066 containerd[1971]: time="2025-01-13T21:30:28.546774388Z" level=info msg="StartContainer for \"eda463c2cb572e7f031dd46da2041b658142680ca0644711653e3cd92c27bb35\"" Jan 13 
21:30:28.686900 systemd[1]: Started cri-containerd-eda463c2cb572e7f031dd46da2041b658142680ca0644711653e3cd92c27bb35.scope - libcontainer container eda463c2cb572e7f031dd46da2041b658142680ca0644711653e3cd92c27bb35. Jan 13 21:30:28.912606 containerd[1971]: time="2025-01-13T21:30:28.912554765Z" level=info msg="StartContainer for \"eda463c2cb572e7f031dd46da2041b658142680ca0644711653e3cd92c27bb35\" returns successfully" Jan 13 21:30:29.107917 sshd[5343]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:29.117836 systemd-logind[1953]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:30:29.119277 systemd[1]: sshd@9-172.31.18.253:22-147.75.109.163:41304.service: Deactivated successfully. Jan 13 21:30:29.126096 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:30:29.134710 systemd-logind[1953]: Removed session 10. Jan 13 21:30:29.405718 kubelet[3350]: I0113 21:30:29.404666 3350 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:31.079606 ntpd[1948]: Listen normally on 8 vxlan.calico 192.168.74.64:123 Jan 13 21:30:31.079937 ntpd[1948]: Listen normally on 9 cali757e7536d21 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 21:30:31.081069 ntpd[1948]: 13 Jan 21:30:31 ntpd[1948]: Listen normally on 8 vxlan.calico 192.168.74.64:123 Jan 13 21:30:31.081069 ntpd[1948]: 13 Jan 21:30:31 ntpd[1948]: Listen normally on 9 cali757e7536d21 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 21:30:31.081069 ntpd[1948]: 13 Jan 21:30:31 ntpd[1948]: Listen normally on 10 caliaabcfc63263 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 13 21:30:31.081069 ntpd[1948]: 13 Jan 21:30:31 ntpd[1948]: Listen normally on 11 cali27d41bf8a2e [fe80::ecee:eeff:feee:eeee%6]:123 Jan 13 21:30:31.081069 ntpd[1948]: 13 Jan 21:30:31 ntpd[1948]: Listen normally on 12 vxlan.calico [fe80::6405:3bff:fe45:d635%7]:123 Jan 13 21:30:31.081069 ntpd[1948]: 13 Jan 21:30:31 ntpd[1948]: Listen normally on 13 cali5cf29fa5ec8 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 21:30:31.081069 ntpd[1948]: 13 Jan 21:30:31 ntpd[1948]: Listen normally on 14 calife1aa6da20f [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 21:30:31.081069 ntpd[1948]: 13 Jan 21:30:31 ntpd[1948]: Listen normally on 15 calibfb5e3ba6d5 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 21:30:31.079999 ntpd[1948]: Listen normally on 10 caliaabcfc63263 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 13 21:30:31.080039 ntpd[1948]: Listen normally on 11 cali27d41bf8a2e [fe80::ecee:eeff:feee:eeee%6]:123 Jan 13 21:30:31.080079 ntpd[1948]: Listen normally on 12 vxlan.calico [fe80::6405:3bff:fe45:d635%7]:123 Jan 13 21:30:31.080120 ntpd[1948]: Listen normally on 13 cali5cf29fa5ec8 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 21:30:31.080167 ntpd[1948]: Listen normally on 14 calife1aa6da20f [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 21:30:31.080204 ntpd[1948]: Listen normally on 15 calibfb5e3ba6d5 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 21:30:31.427897 kubelet[3350]: I0113 21:30:31.427825 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c6b78d879-nt4jz" podStartSLOduration=33.672426423 podStartE2EDuration="38.427802611s" podCreationTimestamp="2025-01-13 21:29:53 +0000 UTC" firstStartedPulling="2025-01-13 21:30:23.73414463 +0000 UTC m=+43.203822096" lastFinishedPulling="2025-01-13 21:30:28.489520822 +0000 UTC m=+47.959198284" observedRunningTime="2025-01-13 21:30:29.488211493 +0000 UTC m=+48.957888966" watchObservedRunningTime="2025-01-13 21:30:31.427802611 +0000 UTC m=+50.897480084" Jan 13 21:30:32.232172 
containerd[1971]: time="2025-01-13T21:30:32.232118589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:32.233960 containerd[1971]: time="2025-01-13T21:30:32.233743720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:30:32.236985 containerd[1971]: time="2025-01-13T21:30:32.235444527Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:32.239072 containerd[1971]: time="2025-01-13T21:30:32.238240215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:32.239072 containerd[1971]: time="2025-01-13T21:30:32.238930309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.747520194s" Jan 13 21:30:32.239072 containerd[1971]: time="2025-01-13T21:30:32.238969202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:30:32.241792 containerd[1971]: time="2025-01-13T21:30:32.241759571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:30:32.310568 containerd[1971]: time="2025-01-13T21:30:32.309953035Z" level=info msg="CreateContainer within sandbox \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:30:32.349142 containerd[1971]: time="2025-01-13T21:30:32.349090367Z" level=info msg="CreateContainer within sandbox \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193\"" Jan 13 21:30:32.349449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103317730.mount: Deactivated successfully. Jan 13 21:30:32.352923 containerd[1971]: time="2025-01-13T21:30:32.352322336Z" level=info msg="StartContainer for \"69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193\"" Jan 13 21:30:32.406999 systemd[1]: Started cri-containerd-69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193.scope - libcontainer container 69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193. Jan 13 21:30:32.515949 containerd[1971]: time="2025-01-13T21:30:32.515694733Z" level=info msg="StartContainer for \"69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193\" returns successfully" Jan 13 21:30:33.233435 kubelet[3350]: I0113 21:30:33.231402 3350 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:34.149234 systemd[1]: Started sshd@10-172.31.18.253:22-147.75.109.163:41318.service - OpenSSH per-connection server daemon (147.75.109.163:41318). 
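The kubelet pod_startup_latency_tracker entries above print their timestamps in Go's default time format, with a monotonic "m=+…" suffix that time.Parse does not understand. Stripping that suffix makes the pull window recoverable; the values below are the firstStartedPulling/lastFinishedPulling pair logged for calico-apiserver-6c6b78d879-nt4jz, and the difference lines up with podStartE2EDuration minus podStartSLOduration:

package main

import (
	"fmt"
	"strings"
	"time"
)

// parseKubeletTime reads the Go default time format used in the
// startup-latency entries, dropping the monotonic "m=+..." suffix.
func parseKubeletTime(s string) (time.Time, error) {
	if wall, _, ok := strings.Cut(s, " m="); ok {
		s = wall
	}
	return time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
}

func main() {
	first, err := parseKubeletTime("2025-01-13 21:30:23.73414463 +0000 UTC m=+43.203822096")
	if err != nil {
		panic(err)
	}
	last, err := parseKubeletTime("2025-01-13 21:30:28.489520822 +0000 UTC m=+47.959198284")
	if err != nil {
		panic(err)
	}
	fmt.Println("image pull window:", last.Sub(first)) // ~4.755s
}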
Jan 13 21:30:34.376486 containerd[1971]: time="2025-01-13T21:30:34.376437624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:34.377577 containerd[1971]: time="2025-01-13T21:30:34.377461866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:30:34.380039 containerd[1971]: time="2025-01-13T21:30:34.378693882Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:34.382097 containerd[1971]: time="2025-01-13T21:30:34.381135580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:34.382097 containerd[1971]: time="2025-01-13T21:30:34.381843247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.140044888s" Jan 13 21:30:34.382097 containerd[1971]: time="2025-01-13T21:30:34.381877487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:30:34.393253 containerd[1971]: time="2025-01-13T21:30:34.393215159Z" level=info msg="CreateContainer within sandbox \"42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:30:34.416187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount861794102.mount: Deactivated successfully. Jan 13 21:30:34.418162 containerd[1971]: time="2025-01-13T21:30:34.418082929Z" level=info msg="CreateContainer within sandbox \"42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7f156f6806177d25c9c0e5a8730bf327c9a273447dfc39c73dcd36f570630133\"" Jan 13 21:30:34.422534 containerd[1971]: time="2025-01-13T21:30:34.419327190Z" level=info msg="StartContainer for \"7f156f6806177d25c9c0e5a8730bf327c9a273447dfc39c73dcd36f570630133\"" Jan 13 21:30:34.424690 sshd[5535]: Accepted publickey for core from 147.75.109.163 port 41318 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:30:34.427875 sshd[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:34.458547 systemd-logind[1953]: New session 11 of user core. Jan 13 21:30:34.464870 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:30:34.571133 systemd[1]: Started cri-containerd-7f156f6806177d25c9c0e5a8730bf327c9a273447dfc39c73dcd36f570630133.scope - libcontainer container 7f156f6806177d25c9c0e5a8730bf327c9a273447dfc39c73dcd36f570630133. 
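The tmpmount cleanup line above ("var-lib-containerd-tmpmounts-containerd\x2dmount861794102.mount") uses systemd's unit-name escaping: path separators become "-" and a literal "-" is hex-escaped as "\x2d". A small decoder covering only the escapes seen in this log; systemd-escape(1) implements the full rules:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses the escaping seen above: "\xNN" sequences first,
// then remaining "-" back to "/".
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/')
			continue
		}
		b.WriteByte(name[i])
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount861794102.mount`))
	// /var/lib/containerd/tmpmounts/containerd-mount861794102
}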
Jan 13 21:30:34.738024 containerd[1971]: time="2025-01-13T21:30:34.736078901Z" level=info msg="StartContainer for \"7f156f6806177d25c9c0e5a8730bf327c9a273447dfc39c73dcd36f570630133\" returns successfully" Jan 13 21:30:34.741086 containerd[1971]: time="2025-01-13T21:30:34.741037777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:30:34.819733 kubelet[3350]: I0113 21:30:34.819509 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64cf758d46-dk7vw" podStartSLOduration=34.43057331 podStartE2EDuration="42.819487334s" podCreationTimestamp="2025-01-13 21:29:52 +0000 UTC" firstStartedPulling="2025-01-13 21:30:23.852322268 +0000 UTC m=+43.321999733" lastFinishedPulling="2025-01-13 21:30:32.241236293 +0000 UTC m=+51.710913757" observedRunningTime="2025-01-13 21:30:33.46447024 +0000 UTC m=+52.934147710" watchObservedRunningTime="2025-01-13 21:30:34.819487334 +0000 UTC m=+54.289164805" Jan 13 21:30:35.328869 sshd[5535]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:35.334269 systemd-logind[1953]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:30:35.335171 systemd[1]: sshd@10-172.31.18.253:22-147.75.109.163:41318.service: Deactivated successfully. Jan 13 21:30:35.337834 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:30:35.339608 systemd-logind[1953]: Removed session 11. Jan 13 21:30:35.405971 systemd[1]: run-containerd-runc-k8s.io-69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193-runc.hj9NS0.mount: Deactivated successfully. Jan 13 21:30:36.808702 containerd[1971]: time="2025-01-13T21:30:36.807907713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:36.812773 containerd[1971]: time="2025-01-13T21:30:36.812680103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:30:36.819861 containerd[1971]: time="2025-01-13T21:30:36.819804330Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:36.849801 containerd[1971]: time="2025-01-13T21:30:36.849747348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:36.853114 containerd[1971]: time="2025-01-13T21:30:36.852879208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.111793572s" Jan 13 21:30:36.853114 containerd[1971]: time="2025-01-13T21:30:36.852962012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:30:36.858810 containerd[1971]: time="2025-01-13T21:30:36.857846841Z" level=info msg="CreateContainer within sandbox 
\"42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:30:36.917883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216641141.mount: Deactivated successfully. Jan 13 21:30:36.922219 containerd[1971]: time="2025-01-13T21:30:36.921139561Z" level=info msg="CreateContainer within sandbox \"42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d5fbd0cadc9abad61639c5eb907dfaa831045a16eb8f93d7e7d3d5cc0e5330a9\"" Jan 13 21:30:36.922392 containerd[1971]: time="2025-01-13T21:30:36.922297668Z" level=info msg="StartContainer for \"d5fbd0cadc9abad61639c5eb907dfaa831045a16eb8f93d7e7d3d5cc0e5330a9\"" Jan 13 21:30:37.016875 systemd[1]: Started cri-containerd-d5fbd0cadc9abad61639c5eb907dfaa831045a16eb8f93d7e7d3d5cc0e5330a9.scope - libcontainer container d5fbd0cadc9abad61639c5eb907dfaa831045a16eb8f93d7e7d3d5cc0e5330a9. Jan 13 21:30:37.088058 containerd[1971]: time="2025-01-13T21:30:37.087553318Z" level=info msg="StartContainer for \"d5fbd0cadc9abad61639c5eb907dfaa831045a16eb8f93d7e7d3d5cc0e5330a9\" returns successfully" Jan 13 21:30:38.369339 kubelet[3350]: I0113 21:30:38.369292 3350 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:30:38.394227 kubelet[3350]: I0113 21:30:38.394136 3350 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:30:38.710624 kubelet[3350]: I0113 21:30:38.710494 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v4xcp" podStartSLOduration=36.559695353 podStartE2EDuration="46.710469418s" podCreationTimestamp="2025-01-13 21:29:52 +0000 UTC" firstStartedPulling="2025-01-13 21:30:26.704600554 +0000 UTC m=+46.174278025" lastFinishedPulling="2025-01-13 21:30:36.855374627 +0000 UTC m=+56.325052090" observedRunningTime="2025-01-13 21:30:37.66785393 +0000 UTC m=+57.137531405" watchObservedRunningTime="2025-01-13 21:30:38.710469418 +0000 UTC m=+58.180146891" Jan 13 21:30:40.366961 systemd[1]: Started sshd@11-172.31.18.253:22-147.75.109.163:37794.service - OpenSSH per-connection server daemon (147.75.109.163:37794). Jan 13 21:30:40.590062 sshd[5663]: Accepted publickey for core from 147.75.109.163 port 37794 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:30:40.592669 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:40.621673 systemd-logind[1953]: New session 12 of user core. Jan 13 21:30:40.633891 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:30:40.967670 containerd[1971]: time="2025-01-13T21:30:40.967122895Z" level=info msg="StopPodSandbox for \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\"" Jan 13 21:30:41.171253 sshd[5663]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:41.181957 systemd-logind[1953]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:30:41.182720 systemd[1]: sshd@11-172.31.18.253:22-147.75.109.163:37794.service: Deactivated successfully. Jan 13 21:30:41.187542 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:30:41.189699 systemd-logind[1953]: Removed session 12. 
Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.307 [WARNING][5688] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0", GenerateName:"calico-apiserver-6c6b78d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"66928d5a-cb4e-4c35-8a71-cae23340ac99", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b78d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c", Pod:"calico-apiserver-6c6b78d879-nt4jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaabcfc63263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.315 [INFO][5688] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.316 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" iface="eth0" netns="" Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.316 [INFO][5688] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.316 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.351 [INFO][5697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" HandleID="k8s-pod-network.929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.351 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.351 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.359 [WARNING][5697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" HandleID="k8s-pod-network.929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.359 [INFO][5697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" HandleID="k8s-pod-network.929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.366 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:41.374356 containerd[1971]: 2025-01-13 21:30:41.370 [INFO][5688] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:41.374356 containerd[1971]: time="2025-01-13T21:30:41.374173067Z" level=info msg="TearDown network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\" successfully" Jan 13 21:30:41.376950 containerd[1971]: time="2025-01-13T21:30:41.374496352Z" level=info msg="StopPodSandbox for \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\" returns successfully" Jan 13 21:30:41.399523 containerd[1971]: time="2025-01-13T21:30:41.399460698Z" level=info msg="RemovePodSandbox for \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\"" Jan 13 21:30:41.399523 containerd[1971]: time="2025-01-13T21:30:41.399523903Z" level=info msg="Forcibly stopping sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\"" Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.477 [WARNING][5715] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0", GenerateName:"calico-apiserver-6c6b78d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"66928d5a-cb4e-4c35-8a71-cae23340ac99", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b78d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"212b7473a7265d55a524e8ef73cbbf08018f8f36007afd4929e759801901267c", Pod:"calico-apiserver-6c6b78d879-nt4jz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaabcfc63263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.478 [INFO][5715] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.478 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" iface="eth0" netns="" Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.478 [INFO][5715] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.478 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.533 [INFO][5721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" HandleID="k8s-pod-network.929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.533 [INFO][5721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.534 [INFO][5721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.552 [WARNING][5721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" HandleID="k8s-pod-network.929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.552 [INFO][5721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" HandleID="k8s-pod-network.929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--nt4jz-eth0" Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.555 [INFO][5721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:41.565785 containerd[1971]: 2025-01-13 21:30:41.558 [INFO][5715] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd" Jan 13 21:30:41.567501 containerd[1971]: time="2025-01-13T21:30:41.567455877Z" level=info msg="TearDown network for sandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\" successfully" Jan 13 21:30:41.581409 containerd[1971]: time="2025-01-13T21:30:41.581329334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:41.594118 containerd[1971]: time="2025-01-13T21:30:41.594055655Z" level=info msg="RemovePodSandbox \"929bea822332316b5614df87b4a92b2aefafb4bbe1698e41aa6e3c803bcbfcfd\" returns successfully" Jan 13 21:30:41.601803 containerd[1971]: time="2025-01-13T21:30:41.601765286Z" level=info msg="StopPodSandbox for \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\"" Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.658 [WARNING][5739] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0", GenerateName:"calico-kube-controllers-64cf758d46-", Namespace:"calico-system", SelfLink:"", UID:"a863c6f7-9d33-4cc7-acb9-6720fe35112d", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cf758d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8", Pod:"calico-kube-controllers-64cf758d46-dk7vw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27d41bf8a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.658 [INFO][5739] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.660 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" iface="eth0" netns="" Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.660 [INFO][5739] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.660 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.692 [INFO][5745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" HandleID="k8s-pod-network.dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.692 [INFO][5745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.692 [INFO][5745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.699 [WARNING][5745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" HandleID="k8s-pod-network.dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.699 [INFO][5745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" HandleID="k8s-pod-network.dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.701 [INFO][5745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:41.705708 containerd[1971]: 2025-01-13 21:30:41.703 [INFO][5739] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:41.706603 containerd[1971]: time="2025-01-13T21:30:41.705760437Z" level=info msg="TearDown network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\" successfully" Jan 13 21:30:41.706603 containerd[1971]: time="2025-01-13T21:30:41.705797391Z" level=info msg="StopPodSandbox for \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\" returns successfully" Jan 13 21:30:41.706603 containerd[1971]: time="2025-01-13T21:30:41.706588975Z" level=info msg="RemovePodSandbox for \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\"" Jan 13 21:30:41.706822 containerd[1971]: time="2025-01-13T21:30:41.706622738Z" level=info msg="Forcibly stopping sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\"" Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.792 [WARNING][5764] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0", GenerateName:"calico-kube-controllers-64cf758d46-", Namespace:"calico-system", SelfLink:"", UID:"a863c6f7-9d33-4cc7-acb9-6720fe35112d", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cf758d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8", Pod:"calico-kube-controllers-64cf758d46-dk7vw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27d41bf8a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.793 [INFO][5764] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.793 [INFO][5764] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" iface="eth0" netns="" Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.793 [INFO][5764] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.793 [INFO][5764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.835 [INFO][5771] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" HandleID="k8s-pod-network.dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.835 [INFO][5771] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.836 [INFO][5771] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.844 [WARNING][5771] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" HandleID="k8s-pod-network.dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.845 [INFO][5771] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" HandleID="k8s-pod-network.dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.847 [INFO][5771] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:41.851119 containerd[1971]: 2025-01-13 21:30:41.849 [INFO][5764] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77" Jan 13 21:30:41.852079 containerd[1971]: time="2025-01-13T21:30:41.851158987Z" level=info msg="TearDown network for sandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\" successfully" Jan 13 21:30:41.857952 containerd[1971]: time="2025-01-13T21:30:41.857890481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:41.858115 containerd[1971]: time="2025-01-13T21:30:41.857966227Z" level=info msg="RemovePodSandbox \"dc2e9b8bef294b5c2a74ca03298aca7d1576eb44665bc1f033d0295691bb3d77\" returns successfully" Jan 13 21:30:41.858738 containerd[1971]: time="2025-01-13T21:30:41.858708644Z" level=info msg="StopPodSandbox for \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\"" Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.912 [WARNING][5790] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32", Pod:"csi-node-driver-v4xcp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfb5e3ba6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.912 [INFO][5790] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.912 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" iface="eth0" netns="" Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.913 [INFO][5790] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.913 [INFO][5790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.945 [INFO][5796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" HandleID="k8s-pod-network.971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.945 [INFO][5796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.945 [INFO][5796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.953 [WARNING][5796] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" HandleID="k8s-pod-network.971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.953 [INFO][5796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" HandleID="k8s-pod-network.971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.957 [INFO][5796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:41.963100 containerd[1971]: 2025-01-13 21:30:41.960 [INFO][5790] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:41.963100 containerd[1971]: time="2025-01-13T21:30:41.962712415Z" level=info msg="TearDown network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\" successfully" Jan 13 21:30:41.963100 containerd[1971]: time="2025-01-13T21:30:41.962937897Z" level=info msg="StopPodSandbox for \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\" returns successfully" Jan 13 21:30:41.966560 containerd[1971]: time="2025-01-13T21:30:41.965691445Z" level=info msg="RemovePodSandbox for \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\"" Jan 13 21:30:41.966560 containerd[1971]: time="2025-01-13T21:30:41.965759001Z" level=info msg="Forcibly stopping sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\"" Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.075 [WARNING][5814] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6e9e58c-0aa3-40c9-acb9-ed2d79b35ed4", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"42a375df93afb4c543044f3eeb2af76cddf00567bc52bdcb60e714acd624fe32", Pod:"csi-node-driver-v4xcp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfb5e3ba6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.076 [INFO][5814] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.076 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" iface="eth0" netns="" Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.076 [INFO][5814] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.076 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.140 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" HandleID="k8s-pod-network.971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.140 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.140 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.150 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" HandleID="k8s-pod-network.971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.150 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" HandleID="k8s-pod-network.971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Workload="ip--172--31--18--253-k8s-csi--node--driver--v4xcp-eth0" Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.152 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:42.159372 containerd[1971]: 2025-01-13 21:30:42.154 [INFO][5814] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897" Jan 13 21:30:42.159372 containerd[1971]: time="2025-01-13T21:30:42.157910998Z" level=info msg="TearDown network for sandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\" successfully" Jan 13 21:30:42.164482 containerd[1971]: time="2025-01-13T21:30:42.164437250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:42.164662 containerd[1971]: time="2025-01-13T21:30:42.164518936Z" level=info msg="RemovePodSandbox \"971a826edf50a41446fbb50f560de3d64efa021b52b31aa573932e5a54ba3897\" returns successfully" Jan 13 21:30:42.165156 containerd[1971]: time="2025-01-13T21:30:42.165129348Z" level=info msg="StopPodSandbox for \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\"" Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.225 [WARNING][5839] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6df22146-8f07-4f5d-bc45-b3dcbe228775", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73", Pod:"coredns-6f6b679f8f-r85h4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife1aa6da20f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.226 [INFO][5839] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.226 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" iface="eth0" netns="" Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.226 [INFO][5839] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.226 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.268 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" HandleID="k8s-pod-network.3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.268 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.268 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.279 [WARNING][5845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" HandleID="k8s-pod-network.3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.279 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" HandleID="k8s-pod-network.3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.282 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:42.288701 containerd[1971]: 2025-01-13 21:30:42.286 [INFO][5839] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:42.288701 containerd[1971]: time="2025-01-13T21:30:42.288667731Z" level=info msg="TearDown network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\" successfully" Jan 13 21:30:42.288701 containerd[1971]: time="2025-01-13T21:30:42.288701816Z" level=info msg="StopPodSandbox for \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\" returns successfully" Jan 13 21:30:42.292236 containerd[1971]: time="2025-01-13T21:30:42.291132272Z" level=info msg="RemovePodSandbox for \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\"" Jan 13 21:30:42.292236 containerd[1971]: time="2025-01-13T21:30:42.291178016Z" level=info msg="Forcibly stopping sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\"" Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.392 [WARNING][5864] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6df22146-8f07-4f5d-bc45-b3dcbe228775", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"3ac3fe92e530bd907e3a53a7c72995f111c2a1cc8db5a4ab9f85396d147dae73", Pod:"coredns-6f6b679f8f-r85h4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife1aa6da20f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.393 [INFO][5864] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.393 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" iface="eth0" netns="" Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.395 [INFO][5864] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.395 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.474 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" HandleID="k8s-pod-network.3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.474 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.474 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.482 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" HandleID="k8s-pod-network.3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.482 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" HandleID="k8s-pod-network.3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--r85h4-eth0" Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.484 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:42.488373 containerd[1971]: 2025-01-13 21:30:42.486 [INFO][5864] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b" Jan 13 21:30:42.489266 containerd[1971]: time="2025-01-13T21:30:42.488412329Z" level=info msg="TearDown network for sandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\" successfully" Jan 13 21:30:42.495317 containerd[1971]: time="2025-01-13T21:30:42.495138636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:42.495317 containerd[1971]: time="2025-01-13T21:30:42.495217810Z" level=info msg="RemovePodSandbox \"3224161c666056b839de2a88ca2f0617571dd0fad7f3f58080ec7b91433f596b\" returns successfully" Jan 13 21:30:42.495857 containerd[1971]: time="2025-01-13T21:30:42.495832544Z" level=info msg="StopPodSandbox for \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\"" Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.542 [WARNING][5888] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"85e576e6-d66c-4263-a4ec-9e1bd46d45d0", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681", Pod:"coredns-6f6b679f8f-trkmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cf29fa5ec8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.542 [INFO][5888] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.542 [INFO][5888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" iface="eth0" netns="" Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.542 [INFO][5888] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.542 [INFO][5888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.571 [INFO][5895] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" HandleID="k8s-pod-network.75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.571 [INFO][5895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.571 [INFO][5895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.580 [WARNING][5895] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" HandleID="k8s-pod-network.75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.580 [INFO][5895] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" HandleID="k8s-pod-network.75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.583 [INFO][5895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:42.588274 containerd[1971]: 2025-01-13 21:30:42.585 [INFO][5888] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:42.588274 containerd[1971]: time="2025-01-13T21:30:42.587159341Z" level=info msg="TearDown network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\" successfully" Jan 13 21:30:42.588274 containerd[1971]: time="2025-01-13T21:30:42.587192156Z" level=info msg="StopPodSandbox for \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\" returns successfully" Jan 13 21:30:42.588274 containerd[1971]: time="2025-01-13T21:30:42.588088957Z" level=info msg="RemovePodSandbox for \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\"" Jan 13 21:30:42.588274 containerd[1971]: time="2025-01-13T21:30:42.588122972Z" level=info msg="Forcibly stopping sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\"" Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.672 [WARNING][5913] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"85e576e6-d66c-4263-a4ec-9e1bd46d45d0", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"86faffa5643ef0d02fb9d904e64d52ba56fb5c6243bdfcd3df042b89cd8d4681", Pod:"coredns-6f6b679f8f-trkmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cf29fa5ec8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.676 [INFO][5913] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.676 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" iface="eth0" netns="" Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.676 [INFO][5913] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.676 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.718 [INFO][5919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" HandleID="k8s-pod-network.75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.718 [INFO][5919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.718 [INFO][5919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.732 [WARNING][5919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" HandleID="k8s-pod-network.75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.733 [INFO][5919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" HandleID="k8s-pod-network.75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Workload="ip--172--31--18--253-k8s-coredns--6f6b679f8f--trkmh-eth0" Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.736 [INFO][5919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:42.744377 containerd[1971]: 2025-01-13 21:30:42.740 [INFO][5913] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0" Jan 13 21:30:42.744377 containerd[1971]: time="2025-01-13T21:30:42.743029076Z" level=info msg="TearDown network for sandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\" successfully" Jan 13 21:30:42.749608 containerd[1971]: time="2025-01-13T21:30:42.749545398Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:42.749805 containerd[1971]: time="2025-01-13T21:30:42.749626010Z" level=info msg="RemovePodSandbox \"75789815a27acdd4bdd5ee768fda7d18d4d61d2c3846bf360428d70ab7c0b8f0\" returns successfully" Jan 13 21:30:42.754608 containerd[1971]: time="2025-01-13T21:30:42.752982730Z" level=info msg="StopPodSandbox for \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\"" Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.837 [WARNING][5937] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0", GenerateName:"calico-apiserver-6c6b78d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd1f0189-02c3-4f32-9cfa-9e41e4d3764b", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b78d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503", Pod:"calico-apiserver-6c6b78d879-25mr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali757e7536d21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.837 [INFO][5937] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.837 [INFO][5937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" iface="eth0" netns="" Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.837 [INFO][5937] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.837 [INFO][5937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.867 [INFO][5944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" HandleID="k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.867 [INFO][5944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.867 [INFO][5944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.877 [WARNING][5944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" HandleID="k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.877 [INFO][5944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" HandleID="k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.880 [INFO][5944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:42.884063 containerd[1971]: 2025-01-13 21:30:42.881 [INFO][5937] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:42.886172 containerd[1971]: time="2025-01-13T21:30:42.884059851Z" level=info msg="TearDown network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\" successfully" Jan 13 21:30:42.886172 containerd[1971]: time="2025-01-13T21:30:42.884203834Z" level=info msg="StopPodSandbox for \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\" returns successfully" Jan 13 21:30:42.886445 containerd[1971]: time="2025-01-13T21:30:42.886413730Z" level=info msg="RemovePodSandbox for \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\"" Jan 13 21:30:42.886529 containerd[1971]: time="2025-01-13T21:30:42.886477678Z" level=info msg="Forcibly stopping sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\"" Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.938 [WARNING][5962] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0", GenerateName:"calico-apiserver-6c6b78d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd1f0189-02c3-4f32-9cfa-9e41e4d3764b", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b78d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"85ec6f66940567ba6888b417246a8550755aa20dee9daf0aaf2740cc3270c503", Pod:"calico-apiserver-6c6b78d879-25mr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali757e7536d21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.938 [INFO][5962] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.938 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" iface="eth0" netns="" Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.938 [INFO][5962] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.938 [INFO][5962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.975 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" HandleID="k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.976 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.976 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.989 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" HandleID="k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.989 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" HandleID="k8s-pod-network.629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Workload="ip--172--31--18--253-k8s-calico--apiserver--6c6b78d879--25mr6-eth0" Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.992 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:42.998187 containerd[1971]: 2025-01-13 21:30:42.995 [INFO][5962] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775" Jan 13 21:30:42.999335 containerd[1971]: time="2025-01-13T21:30:42.998235848Z" level=info msg="TearDown network for sandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\" successfully" Jan 13 21:30:43.012327 containerd[1971]: time="2025-01-13T21:30:43.009341887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:30:43.012327 containerd[1971]: time="2025-01-13T21:30:43.009465472Z" level=info msg="RemovePodSandbox \"629542e83b9d10c7e9ef03181fb52ee77e15de360d3ad1e10d5159e02e976775\" returns successfully" Jan 13 21:30:46.215156 systemd[1]: Started sshd@12-172.31.18.253:22-147.75.109.163:37796.service - OpenSSH per-connection server daemon (147.75.109.163:37796). Jan 13 21:30:46.436232 sshd[5981]: Accepted publickey for core from 147.75.109.163 port 37796 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:30:46.437112 sshd[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:46.445468 systemd-logind[1953]: New session 13 of user core. Jan 13 21:30:46.459134 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:30:46.834508 sshd[5981]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:46.840902 systemd[1]: sshd@12-172.31.18.253:22-147.75.109.163:37796.service: Deactivated successfully. Jan 13 21:30:46.848083 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:30:46.849346 systemd-logind[1953]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:30:46.851131 systemd-logind[1953]: Removed session 13. Jan 13 21:30:46.872228 systemd[1]: Started sshd@13-172.31.18.253:22-147.75.109.163:37802.service - OpenSSH per-connection server daemon (147.75.109.163:37802). Jan 13 21:30:47.054397 sshd[5996]: Accepted publickey for core from 147.75.109.163 port 37802 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:30:47.056785 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:47.065900 systemd-logind[1953]: New session 14 of user core. Jan 13 21:30:47.074946 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:30:47.464450 sshd[5996]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:47.473078 systemd-logind[1953]: Session 14 logged out. Waiting for processes to exit. 
Jan 13 21:30:47.473485 systemd[1]: sshd@13-172.31.18.253:22-147.75.109.163:37802.service: Deactivated successfully. Jan 13 21:30:47.481010 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:30:47.503223 systemd-logind[1953]: Removed session 14. Jan 13 21:30:47.516116 systemd[1]: Started sshd@14-172.31.18.253:22-147.75.109.163:47078.service - OpenSSH per-connection server daemon (147.75.109.163:47078). Jan 13 21:30:47.691761 sshd[6007]: Accepted publickey for core from 147.75.109.163 port 47078 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:30:47.694525 sshd[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:47.710047 systemd-logind[1953]: New session 15 of user core. Jan 13 21:30:47.715109 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:30:47.998626 sshd[6007]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:48.004340 systemd[1]: sshd@14-172.31.18.253:22-147.75.109.163:47078.service: Deactivated successfully. Jan 13 21:30:48.008192 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:30:48.011072 systemd-logind[1953]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:30:48.012832 systemd-logind[1953]: Removed session 15. Jan 13 21:30:49.451851 systemd[1]: run-containerd-runc-k8s.io-69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193-runc.7gv3sv.mount: Deactivated successfully. Jan 13 21:30:53.052837 systemd[1]: Started sshd@15-172.31.18.253:22-147.75.109.163:47086.service - OpenSSH per-connection server daemon (147.75.109.163:47086). Jan 13 21:30:53.315004 sshd[6063]: Accepted publickey for core from 147.75.109.163 port 47086 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:30:53.318062 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:53.327021 systemd-logind[1953]: New session 16 of user core. Jan 13 21:30:53.331858 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:30:53.927029 sshd[6063]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:53.936314 systemd-logind[1953]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:30:53.939674 systemd[1]: sshd@15-172.31.18.253:22-147.75.109.163:47086.service: Deactivated successfully. Jan 13 21:30:53.949894 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:30:53.958753 systemd-logind[1953]: Removed session 16. Jan 13 21:30:54.007628 containerd[1971]: time="2025-01-13T21:30:54.007578274Z" level=info msg="StopContainer for \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\" with timeout 300 (s)" Jan 13 21:30:54.012464 containerd[1971]: time="2025-01-13T21:30:54.012179393Z" level=info msg="Stop container \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\" with signal terminated" Jan 13 21:30:54.256021 containerd[1971]: time="2025-01-13T21:30:54.255885391Z" level=info msg="StopContainer for \"69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193\" with timeout 30 (s)" Jan 13 21:30:54.256501 containerd[1971]: time="2025-01-13T21:30:54.256467294Z" level=info msg="Stop container \"69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193\" with signal terminated" Jan 13 21:30:54.306741 systemd[1]: cri-containerd-69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193.scope: Deactivated successfully. 
Jan 13 21:30:54.360080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193-rootfs.mount: Deactivated successfully. Jan 13 21:30:54.368791 containerd[1971]: time="2025-01-13T21:30:54.350002959Z" level=info msg="shim disconnected" id=69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193 namespace=k8s.io Jan 13 21:30:54.388692 containerd[1971]: time="2025-01-13T21:30:54.388479903Z" level=warning msg="cleaning up after shim disconnected" id=69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193 namespace=k8s.io Jan 13 21:30:54.388692 containerd[1971]: time="2025-01-13T21:30:54.388516868Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:54.454611 containerd[1971]: time="2025-01-13T21:30:54.454567600Z" level=info msg="StopContainer for \"69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193\" returns successfully" Jan 13 21:30:54.480538 containerd[1971]: time="2025-01-13T21:30:54.479576873Z" level=info msg="StopPodSandbox for \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\"" Jan 13 21:30:54.496244 containerd[1971]: time="2025-01-13T21:30:54.496178975Z" level=info msg="Container to stop \"69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:30:54.507732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8-shm.mount: Deactivated successfully. Jan 13 21:30:54.514946 systemd[1]: cri-containerd-998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8.scope: Deactivated successfully. Jan 13 21:30:54.549369 containerd[1971]: time="2025-01-13T21:30:54.549111779Z" level=info msg="shim disconnected" id=998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8 namespace=k8s.io Jan 13 21:30:54.549369 containerd[1971]: time="2025-01-13T21:30:54.549301408Z" level=warning msg="cleaning up after shim disconnected" id=998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8 namespace=k8s.io Jan 13 21:30:54.549369 containerd[1971]: time="2025-01-13T21:30:54.549319980Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:54.557877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8-rootfs.mount: Deactivated successfully. Jan 13 21:30:54.740182 kubelet[3350]: I0113 21:30:54.740131 3350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Jan 13 21:30:54.763024 systemd-networkd[1894]: cali27d41bf8a2e: Link DOWN Jan 13 21:30:54.763034 systemd-networkd[1894]: cali27d41bf8a2e: Lost carrier Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.758 [INFO][6158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.761 [INFO][6158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" iface="eth0" netns="/var/run/netns/cni-47d47079-84ad-0bf0-e335-7bbc398fede5" Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.761 [INFO][6158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" iface="eth0" netns="/var/run/netns/cni-47d47079-84ad-0bf0-e335-7bbc398fede5" Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.779 [INFO][6158] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" after=18.673819ms iface="eth0" netns="/var/run/netns/cni-47d47079-84ad-0bf0-e335-7bbc398fede5" Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.779 [INFO][6158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.779 [INFO][6158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.874 [INFO][6166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.876 [INFO][6166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.876 [INFO][6166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.989 [INFO][6166] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.989 [INFO][6166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0" Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.996 [INFO][6166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:55.002440 containerd[1971]: 2025-01-13 21:30:54.999 [INFO][6158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Jan 13 21:30:55.006862 containerd[1971]: time="2025-01-13T21:30:55.006799823Z" level=info msg="TearDown network for sandbox \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\" successfully" Jan 13 21:30:55.006862 containerd[1971]: time="2025-01-13T21:30:55.006858563Z" level=info msg="StopPodSandbox for \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\" returns successfully" Jan 13 21:30:55.014246 systemd[1]: run-netns-cni\x2d47d47079\x2d84ad\x2d0bf0\x2de335\x2d7bbc398fede5.mount: Deactivated successfully. 
Jan 13 21:30:55.158941 kubelet[3350]: I0113 21:30:55.156069 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a863c6f7-9d33-4cc7-acb9-6720fe35112d-tigera-ca-bundle\") pod \"a863c6f7-9d33-4cc7-acb9-6720fe35112d\" (UID: \"a863c6f7-9d33-4cc7-acb9-6720fe35112d\") " Jan 13 21:30:55.160168 kubelet[3350]: I0113 21:30:55.159136 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vc7r\" (UniqueName: \"kubernetes.io/projected/a863c6f7-9d33-4cc7-acb9-6720fe35112d-kube-api-access-4vc7r\") pod \"a863c6f7-9d33-4cc7-acb9-6720fe35112d\" (UID: \"a863c6f7-9d33-4cc7-acb9-6720fe35112d\") " Jan 13 21:30:55.166527 kubelet[3350]: I0113 21:30:55.166465 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a863c6f7-9d33-4cc7-acb9-6720fe35112d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a863c6f7-9d33-4cc7-acb9-6720fe35112d" (UID: "a863c6f7-9d33-4cc7-acb9-6720fe35112d"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:30:55.167507 kubelet[3350]: I0113 21:30:55.167468 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a863c6f7-9d33-4cc7-acb9-6720fe35112d-kube-api-access-4vc7r" (OuterVolumeSpecName: "kube-api-access-4vc7r") pod "a863c6f7-9d33-4cc7-acb9-6720fe35112d" (UID: "a863c6f7-9d33-4cc7-acb9-6720fe35112d"). InnerVolumeSpecName "kube-api-access-4vc7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:30:55.168214 systemd[1]: var-lib-kubelet-pods-a863c6f7\x2d9d33\x2d4cc7\x2dacb9\x2d6720fe35112d-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 13 21:30:55.262049 kubelet[3350]: I0113 21:30:55.262006 3350 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a863c6f7-9d33-4cc7-acb9-6720fe35112d-tigera-ca-bundle\") on node \"ip-172-31-18-253\" DevicePath \"\"" Jan 13 21:30:55.262049 kubelet[3350]: I0113 21:30:55.262047 3350 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4vc7r\" (UniqueName: \"kubernetes.io/projected/a863c6f7-9d33-4cc7-acb9-6720fe35112d-kube-api-access-4vc7r\") on node \"ip-172-31-18-253\" DevicePath \"\"" Jan 13 21:30:55.350537 systemd[1]: var-lib-kubelet-pods-a863c6f7\x2d9d33\x2d4cc7\x2dacb9\x2d6720fe35112d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4vc7r.mount: Deactivated successfully. Jan 13 21:30:55.770257 systemd[1]: Removed slice kubepods-besteffort-poda863c6f7_9d33_4cc7_acb9_6720fe35112d.slice - libcontainer container kubepods-besteffort-poda863c6f7_9d33_4cc7_acb9_6720fe35112d.slice. Jan 13 21:30:55.876374 kubelet[3350]: E0113 21:30:55.876255 3350 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a863c6f7-9d33-4cc7-acb9-6720fe35112d" containerName="calico-kube-controllers" Jan 13 21:30:55.877333 kubelet[3350]: I0113 21:30:55.877214 3350 memory_manager.go:354] "RemoveStaleState removing state" podUID="a863c6f7-9d33-4cc7-acb9-6720fe35112d" containerName="calico-kube-controllers" Jan 13 21:30:55.901276 systemd[1]: Created slice kubepods-besteffort-pod54982bb8_078c_4a0e_9911_57e88454c44a.slice - libcontainer container kubepods-besteffort-pod54982bb8_078c_4a0e_9911_57e88454c44a.slice. 
Jan 13 21:30:56.067688 kubelet[3350]: I0113 21:30:56.067429 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2nc\" (UniqueName: \"kubernetes.io/projected/54982bb8-078c-4a0e-9911-57e88454c44a-kube-api-access-fp2nc\") pod \"calico-kube-controllers-645fd67444-98fpq\" (UID: \"54982bb8-078c-4a0e-9911-57e88454c44a\") " pod="calico-system/calico-kube-controllers-645fd67444-98fpq" Jan 13 21:30:56.067688 kubelet[3350]: I0113 21:30:56.067541 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54982bb8-078c-4a0e-9911-57e88454c44a-tigera-ca-bundle\") pod \"calico-kube-controllers-645fd67444-98fpq\" (UID: \"54982bb8-078c-4a0e-9911-57e88454c44a\") " pod="calico-system/calico-kube-controllers-645fd67444-98fpq" Jan 13 21:30:56.215424 containerd[1971]: time="2025-01-13T21:30:56.215369986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645fd67444-98fpq,Uid:54982bb8-078c-4a0e-9911-57e88454c44a,Namespace:calico-system,Attempt:0,}" Jan 13 21:30:56.546857 (udev-worker)[6165]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:30:56.547714 systemd-networkd[1894]: cali1b41b7f208b: Link UP Jan 13 21:30:56.550233 systemd-networkd[1894]: cali1b41b7f208b: Gained carrier Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.328 [INFO][6195] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0 calico-kube-controllers-645fd67444- calico-system 54982bb8-078c-4a0e-9911-57e88454c44a 1190 0 2025-01-13 21:30:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:645fd67444 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-253 calico-kube-controllers-645fd67444-98fpq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1b41b7f208b [] []}} ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Namespace="calico-system" Pod="calico-kube-controllers-645fd67444-98fpq" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.332 [INFO][6195] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Namespace="calico-system" Pod="calico-kube-controllers-645fd67444-98fpq" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.381 [INFO][6211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" HandleID="k8s-pod-network.3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.497 [INFO][6211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" HandleID="k8s-pod-network.3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" 
Workload="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011b4d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-253", "pod":"calico-kube-controllers-645fd67444-98fpq", "timestamp":"2025-01-13 21:30:56.381186635 +0000 UTC"}, Hostname:"ip-172-31-18-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.498 [INFO][6211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.498 [INFO][6211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.498 [INFO][6211] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-253' Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.501 [INFO][6211] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" host="ip-172-31-18-253" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.507 [INFO][6211] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-253" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.513 [INFO][6211] ipam/ipam.go 489: Trying affinity for 192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.517 [INFO][6211] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.520 [INFO][6211] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ip-172-31-18-253" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.520 [INFO][6211] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" host="ip-172-31-18-253" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.523 [INFO][6211] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.529 [INFO][6211] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" host="ip-172-31-18-253" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.539 [INFO][6211] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.71/26] block=192.168.74.64/26 handle="k8s-pod-network.3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" host="ip-172-31-18-253" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.539 [INFO][6211] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.71/26] handle="k8s-pod-network.3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" host="ip-172-31-18-253" Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.539 [INFO][6211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:30:56.580814 containerd[1971]: 2025-01-13 21:30:56.539 [INFO][6211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.71/26] IPv6=[] ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" HandleID="k8s-pod-network.3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" Jan 13 21:30:56.584752 containerd[1971]: 2025-01-13 21:30:56.543 [INFO][6195] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Namespace="calico-system" Pod="calico-kube-controllers-645fd67444-98fpq" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0", GenerateName:"calico-kube-controllers-645fd67444-", Namespace:"calico-system", SelfLink:"", UID:"54982bb8-078c-4a0e-9911-57e88454c44a", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645fd67444", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"", Pod:"calico-kube-controllers-645fd67444-98fpq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1b41b7f208b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:56.584752 containerd[1971]: 2025-01-13 21:30:56.543 [INFO][6195] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.71/32] ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Namespace="calico-system" Pod="calico-kube-controllers-645fd67444-98fpq" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" Jan 13 21:30:56.584752 containerd[1971]: 2025-01-13 21:30:56.544 [INFO][6195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b41b7f208b ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Namespace="calico-system" Pod="calico-kube-controllers-645fd67444-98fpq" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" Jan 13 21:30:56.584752 containerd[1971]: 2025-01-13 21:30:56.552 [INFO][6195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Namespace="calico-system" Pod="calico-kube-controllers-645fd67444-98fpq" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" Jan 13 21:30:56.584752 containerd[1971]: 2025-01-13 21:30:56.555 [INFO][6195] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Namespace="calico-system" Pod="calico-kube-controllers-645fd67444-98fpq" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0", GenerateName:"calico-kube-controllers-645fd67444-", Namespace:"calico-system", SelfLink:"", UID:"54982bb8-078c-4a0e-9911-57e88454c44a", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645fd67444", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-253", ContainerID:"3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e", Pod:"calico-kube-controllers-645fd67444-98fpq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1b41b7f208b", MAC:"6e:49:aa:41:2b:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:56.584752 containerd[1971]: 2025-01-13 21:30:56.574 [INFO][6195] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e" Namespace="calico-system" Pod="calico-kube-controllers-645fd67444-98fpq" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--645fd67444--98fpq-eth0" Jan 13 21:30:56.633704 containerd[1971]: time="2025-01-13T21:30:56.632590185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:56.633704 containerd[1971]: time="2025-01-13T21:30:56.632776141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:56.633704 containerd[1971]: time="2025-01-13T21:30:56.632822981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:56.633704 containerd[1971]: time="2025-01-13T21:30:56.633198051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:56.677174 systemd[1]: Started cri-containerd-3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e.scope - libcontainer container 3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e. 
Jan 13 21:30:56.730709 containerd[1971]: time="2025-01-13T21:30:56.730319615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645fd67444-98fpq,Uid:54982bb8-078c-4a0e-9911-57e88454c44a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e\"" Jan 13 21:30:56.765001 containerd[1971]: time="2025-01-13T21:30:56.764960757Z" level=info msg="CreateContainer within sandbox \"3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:30:56.793470 containerd[1971]: time="2025-01-13T21:30:56.793353313Z" level=info msg="CreateContainer within sandbox \"3292812191aacb91a5b40eedcb1268693efb6de26702c6d62e846f0201f2fe8e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"96496e38a1645e90f3fdc852bde6559bf23d647d732802c6e1c3f3bb18c00116\"" Jan 13 21:30:56.795626 containerd[1971]: time="2025-01-13T21:30:56.794260531Z" level=info msg="StartContainer for \"96496e38a1645e90f3fdc852bde6559bf23d647d732802c6e1c3f3bb18c00116\"" Jan 13 21:30:56.806731 kubelet[3350]: I0113 21:30:56.806609 3350 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a863c6f7-9d33-4cc7-acb9-6720fe35112d" path="/var/lib/kubelet/pods/a863c6f7-9d33-4cc7-acb9-6720fe35112d/volumes" Jan 13 21:30:56.831869 systemd[1]: Started cri-containerd-96496e38a1645e90f3fdc852bde6559bf23d647d732802c6e1c3f3bb18c00116.scope - libcontainer container 96496e38a1645e90f3fdc852bde6559bf23d647d732802c6e1c3f3bb18c00116. Jan 13 21:30:56.890097 containerd[1971]: time="2025-01-13T21:30:56.890046194Z" level=info msg="StartContainer for \"96496e38a1645e90f3fdc852bde6559bf23d647d732802c6e1c3f3bb18c00116\" returns successfully" Jan 13 21:30:57.588116 systemd-networkd[1894]: cali1b41b7f208b: Gained IPv6LL Jan 13 21:30:57.769760 kubelet[3350]: I0113 21:30:57.769035 3350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-645fd67444-98fpq" podStartSLOduration=2.769008335 podStartE2EDuration="2.769008335s" podCreationTimestamp="2025-01-13 21:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:30:57.765926055 +0000 UTC m=+77.235603525" watchObservedRunningTime="2025-01-13 21:30:57.769008335 +0000 UTC m=+77.238685807" Jan 13 21:30:59.001564 systemd[1]: Started sshd@16-172.31.18.253:22-147.75.109.163:44408.service - OpenSSH per-connection server daemon (147.75.109.163:44408). Jan 13 21:30:59.117052 systemd[1]: cri-containerd-d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7.scope: Deactivated successfully. Jan 13 21:30:59.158741 containerd[1971]: time="2025-01-13T21:30:59.158672269Z" level=info msg="shim disconnected" id=d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7 namespace=k8s.io Jan 13 21:30:59.161037 containerd[1971]: time="2025-01-13T21:30:59.160499913Z" level=warning msg="cleaning up after shim disconnected" id=d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7 namespace=k8s.io Jan 13 21:30:59.161037 containerd[1971]: time="2025-01-13T21:30:59.160556469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:59.168434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7-rootfs.mount: Deactivated successfully. 
Jan 13 21:30:59.190683 containerd[1971]: time="2025-01-13T21:30:59.190535888Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:30:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 21:30:59.233102 sshd[6384]: Accepted publickey for core from 147.75.109.163 port 44408 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:30:59.236809 sshd[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:59.249052 systemd-logind[1953]: New session 17 of user core. Jan 13 21:30:59.252190 containerd[1971]: time="2025-01-13T21:30:59.251074679Z" level=info msg="StopContainer for \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\" returns successfully" Jan 13 21:30:59.252190 containerd[1971]: time="2025-01-13T21:30:59.251906200Z" level=info msg="StopPodSandbox for \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\"" Jan 13 21:30:59.252190 containerd[1971]: time="2025-01-13T21:30:59.251954132Z" level=info msg="Container to stop \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:30:59.251834 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:30:59.266632 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5-shm.mount: Deactivated successfully. Jan 13 21:30:59.284454 systemd[1]: cri-containerd-4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5.scope: Deactivated successfully. Jan 13 21:30:59.326668 containerd[1971]: time="2025-01-13T21:30:59.324330349Z" level=info msg="shim disconnected" id=4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5 namespace=k8s.io Jan 13 21:30:59.326668 containerd[1971]: time="2025-01-13T21:30:59.324397914Z" level=warning msg="cleaning up after shim disconnected" id=4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5 namespace=k8s.io Jan 13 21:30:59.326668 containerd[1971]: time="2025-01-13T21:30:59.324410483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:59.328346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5-rootfs.mount: Deactivated successfully. 
Jan 13 21:30:59.370744 containerd[1971]: time="2025-01-13T21:30:59.370696867Z" level=info msg="TearDown network for sandbox \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\" successfully" Jan 13 21:30:59.370744 containerd[1971]: time="2025-01-13T21:30:59.370732065Z" level=info msg="StopPodSandbox for \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\" returns successfully" Jan 13 21:30:59.499079 kubelet[3350]: I0113 21:30:59.496947 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r74z\" (UniqueName: \"kubernetes.io/projected/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-kube-api-access-6r74z\") pod \"cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65\" (UID: \"cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65\") " Jan 13 21:30:59.499079 kubelet[3350]: I0113 21:30:59.496998 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-typha-certs\") pod \"cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65\" (UID: \"cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65\") " Jan 13 21:30:59.499079 kubelet[3350]: I0113 21:30:59.497028 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-tigera-ca-bundle\") pod \"cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65\" (UID: \"cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65\") " Jan 13 21:30:59.516192 systemd[1]: var-lib-kubelet-pods-cac0d9f0\x2d2f5d\x2d4f9a\x2d9fb2\x2d44ea3d420d65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6r74z.mount: Deactivated successfully. Jan 13 21:30:59.519580 kubelet[3350]: I0113 21:30:59.519475 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-kube-api-access-6r74z" (OuterVolumeSpecName: "kube-api-access-6r74z") pod "cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65" (UID: "cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65"). InnerVolumeSpecName "kube-api-access-6r74z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:30:59.523534 kubelet[3350]: I0113 21:30:59.521426 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65" (UID: "cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:30:59.524101 kubelet[3350]: I0113 21:30:59.523971 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65" (UID: "cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:30:59.598854 kubelet[3350]: I0113 21:30:59.597484 3350 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6r74z\" (UniqueName: \"kubernetes.io/projected/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-kube-api-access-6r74z\") on node \"ip-172-31-18-253\" DevicePath \"\"" Jan 13 21:30:59.598854 kubelet[3350]: I0113 21:30:59.597526 3350 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-typha-certs\") on node \"ip-172-31-18-253\" DevicePath \"\"" Jan 13 21:30:59.598854 kubelet[3350]: I0113 21:30:59.597543 3350 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65-tigera-ca-bundle\") on node \"ip-172-31-18-253\" DevicePath \"\"" Jan 13 21:30:59.786373 systemd[1]: Removed slice kubepods-besteffort-podcac0d9f0_2f5d_4f9a_9fb2_44ea3d420d65.slice - libcontainer container kubepods-besteffort-podcac0d9f0_2f5d_4f9a_9fb2_44ea3d420d65.slice. Jan 13 21:30:59.787419 kubelet[3350]: I0113 21:30:59.786695 3350 scope.go:117] "RemoveContainer" containerID="d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7" Jan 13 21:30:59.822570 containerd[1971]: time="2025-01-13T21:30:59.821826289Z" level=info msg="RemoveContainer for \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\"" Jan 13 21:30:59.832445 containerd[1971]: time="2025-01-13T21:30:59.832350286Z" level=info msg="RemoveContainer for \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\" returns successfully" Jan 13 21:30:59.860952 sshd[6384]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:59.866787 systemd-logind[1953]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:30:59.875016 systemd[1]: sshd@16-172.31.18.253:22-147.75.109.163:44408.service: Deactivated successfully. Jan 13 21:30:59.895954 kubelet[3350]: I0113 21:30:59.887379 3350 scope.go:117] "RemoveContainer" containerID="d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7" Jan 13 21:30:59.892850 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:30:59.897397 systemd-logind[1953]: Removed session 17. Jan 13 21:30:59.976860 systemd[1]: var-lib-kubelet-pods-cac0d9f0\x2d2f5d\x2d4f9a\x2d9fb2\x2d44ea3d420d65-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 13 21:30:59.977546 systemd[1]: var-lib-kubelet-pods-cac0d9f0\x2d2f5d\x2d4f9a\x2d9fb2\x2d44ea3d420d65-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Jan 13 21:30:59.979771 containerd[1971]: time="2025-01-13T21:30:59.913571000Z" level=error msg="ContainerStatus for \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\": not found" Jan 13 21:31:00.083177 ntpd[1948]: Listen normally on 16 cali1b41b7f208b [fe80::ecee:eeff:feee:eeee%13]:123 Jan 13 21:31:00.085883 ntpd[1948]: 13 Jan 21:31:00 ntpd[1948]: Listen normally on 16 cali1b41b7f208b [fe80::ecee:eeff:feee:eeee%13]:123 Jan 13 21:31:00.085883 ntpd[1948]: 13 Jan 21:31:00 ntpd[1948]: Deleting interface #11 cali27d41bf8a2e, fe80::ecee:eeff:feee:eeee%6#123, interface stats: received=0, sent=0, dropped=0, active_time=29 secs Jan 13 21:31:00.084760 ntpd[1948]: Deleting interface #11 cali27d41bf8a2e, fe80::ecee:eeff:feee:eeee%6#123, interface stats: received=0, sent=0, dropped=0, active_time=29 secs Jan 13 21:31:00.099177 kubelet[3350]: E0113 21:31:00.099106 3350 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\": not found" containerID="d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7" Jan 13 21:31:00.099334 kubelet[3350]: I0113 21:31:00.099194 3350 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7"} err="failed to get container status \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5e3462f1620618fe2cea82300f062d8e2b7a98bf57fb7c82139c5d88ff42ff7\": not found" Jan 13 21:31:00.808504 kubelet[3350]: I0113 21:31:00.808454 3350 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65" path="/var/lib/kubelet/pods/cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65/volumes" Jan 13 21:31:04.906384 systemd[1]: Started sshd@17-172.31.18.253:22-147.75.109.163:44416.service - OpenSSH per-connection server daemon (147.75.109.163:44416). Jan 13 21:31:05.079218 sshd[6582]: Accepted publickey for core from 147.75.109.163 port 44416 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:05.080130 sshd[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:05.087177 systemd-logind[1953]: New session 18 of user core. Jan 13 21:31:05.093969 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:31:05.409934 sshd[6582]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:05.413756 systemd[1]: sshd@17-172.31.18.253:22-147.75.109.163:44416.service: Deactivated successfully. Jan 13 21:31:05.417626 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:31:05.419926 systemd-logind[1953]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:31:05.423211 systemd-logind[1953]: Removed session 18. Jan 13 21:31:10.447304 systemd[1]: Started sshd@18-172.31.18.253:22-147.75.109.163:39490.service - OpenSSH per-connection server daemon (147.75.109.163:39490). 
Jan 13 21:31:10.669258 sshd[6717]: Accepted publickey for core from 147.75.109.163 port 39490 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:10.671921 sshd[6717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:10.682503 systemd-logind[1953]: New session 19 of user core. Jan 13 21:31:10.687946 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:31:11.150326 sshd[6717]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:11.157199 systemd[1]: sshd@18-172.31.18.253:22-147.75.109.163:39490.service: Deactivated successfully. Jan 13 21:31:11.160097 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:31:11.161156 systemd-logind[1953]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:31:11.163104 systemd-logind[1953]: Removed session 19. Jan 13 21:31:16.191041 systemd[1]: Started sshd@19-172.31.18.253:22-147.75.109.163:39502.service - OpenSSH per-connection server daemon (147.75.109.163:39502). Jan 13 21:31:16.420114 sshd[6812]: Accepted publickey for core from 147.75.109.163 port 39502 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:16.426972 sshd[6812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:16.441337 systemd-logind[1953]: New session 20 of user core. Jan 13 21:31:16.444908 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:31:16.790184 sshd[6812]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:16.796726 systemd[1]: sshd@19-172.31.18.253:22-147.75.109.163:39502.service: Deactivated successfully. Jan 13 21:31:16.799583 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:31:16.801135 systemd-logind[1953]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:31:16.806608 systemd-logind[1953]: Removed session 20. Jan 13 21:31:16.825158 systemd[1]: Started sshd@20-172.31.18.253:22-147.75.109.163:39512.service - OpenSSH per-connection server daemon (147.75.109.163:39512). Jan 13 21:31:17.013696 sshd[6839]: Accepted publickey for core from 147.75.109.163 port 39512 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:17.015724 sshd[6839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:17.023027 systemd-logind[1953]: New session 21 of user core. Jan 13 21:31:17.028881 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:31:20.440365 sshd[6839]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:20.447667 systemd-logind[1953]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:31:20.449750 systemd[1]: sshd@20-172.31.18.253:22-147.75.109.163:39512.service: Deactivated successfully. Jan 13 21:31:20.455833 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:31:20.512781 systemd[1]: Started sshd@21-172.31.18.253:22-147.75.109.163:60034.service - OpenSSH per-connection server daemon (147.75.109.163:60034). Jan 13 21:31:20.514795 systemd-logind[1953]: Removed session 21. Jan 13 21:31:20.727598 sshd[6918]: Accepted publickey for core from 147.75.109.163 port 60034 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:20.733024 sshd[6918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:20.740538 systemd-logind[1953]: New session 22 of user core. Jan 13 21:31:20.745848 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 13 21:31:24.216440 sshd[6918]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:24.249293 systemd[1]: sshd@21-172.31.18.253:22-147.75.109.163:60034.service: Deactivated successfully. Jan 13 21:31:24.256481 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:31:24.277734 systemd-logind[1953]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:31:24.287426 systemd[1]: Started sshd@22-172.31.18.253:22-147.75.109.163:60036.service - OpenSSH per-connection server daemon (147.75.109.163:60036). Jan 13 21:31:24.291845 systemd-logind[1953]: Removed session 22. Jan 13 21:31:24.529613 sshd[6987]: Accepted publickey for core from 147.75.109.163 port 60036 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:24.533180 sshd[6987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:24.538717 systemd-logind[1953]: New session 23 of user core. Jan 13 21:31:24.554434 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:31:25.735041 sshd[6987]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:25.741370 systemd[1]: sshd@22-172.31.18.253:22-147.75.109.163:60036.service: Deactivated successfully. Jan 13 21:31:25.745012 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:31:25.747751 systemd-logind[1953]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:31:25.749522 systemd-logind[1953]: Removed session 23. Jan 13 21:31:25.784425 systemd[1]: Started sshd@23-172.31.18.253:22-147.75.109.163:60040.service - OpenSSH per-connection server daemon (147.75.109.163:60040). Jan 13 21:31:26.019480 sshd[7016]: Accepted publickey for core from 147.75.109.163 port 60040 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:26.021041 sshd[7016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:26.028802 systemd-logind[1953]: New session 24 of user core. Jan 13 21:31:26.038065 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:31:26.373205 sshd[7016]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:26.403680 systemd[1]: run-containerd-runc-k8s.io-96496e38a1645e90f3fdc852bde6559bf23d647d732802c6e1c3f3bb18c00116-runc.CfVJTV.mount: Deactivated successfully. Jan 13 21:31:26.408533 systemd[1]: sshd@23-172.31.18.253:22-147.75.109.163:60040.service: Deactivated successfully. Jan 13 21:31:26.418080 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:31:26.427528 systemd-logind[1953]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:31:26.433035 systemd-logind[1953]: Removed session 24. Jan 13 21:31:28.967943 systemd[1]: run-containerd-runc-k8s.io-6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625-runc.OeetVk.mount: Deactivated successfully. Jan 13 21:31:31.421981 systemd[1]: Started sshd@24-172.31.18.253:22-147.75.109.163:56700.service - OpenSSH per-connection server daemon (147.75.109.163:56700). Jan 13 21:31:31.598906 sshd[7158]: Accepted publickey for core from 147.75.109.163 port 56700 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:31.603809 sshd[7158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:31.633872 systemd-logind[1953]: New session 25 of user core. Jan 13 21:31:31.635900 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 13 21:31:31.972750 sshd[7158]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:31.979125 systemd-logind[1953]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:31:31.981282 systemd[1]: sshd@24-172.31.18.253:22-147.75.109.163:56700.service: Deactivated successfully. Jan 13 21:31:31.984000 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:31:31.986405 systemd-logind[1953]: Removed session 25. Jan 13 21:31:37.010126 systemd[1]: Started sshd@25-172.31.18.253:22-147.75.109.163:56716.service - OpenSSH per-connection server daemon (147.75.109.163:56716). Jan 13 21:31:37.249793 sshd[7290]: Accepted publickey for core from 147.75.109.163 port 56716 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:37.252079 sshd[7290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:37.271258 systemd-logind[1953]: New session 26 of user core. Jan 13 21:31:37.279895 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:31:37.545861 systemd[1]: run-containerd-runc-k8s.io-6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625-runc.84n0xn.mount: Deactivated successfully. Jan 13 21:31:37.595910 sshd[7290]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:37.606935 systemd[1]: sshd@25-172.31.18.253:22-147.75.109.163:56716.service: Deactivated successfully. Jan 13 21:31:37.613905 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:31:37.619533 systemd-logind[1953]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:31:37.622106 systemd-logind[1953]: Removed session 26. Jan 13 21:31:37.734290 containerd[1971]: time="2025-01-13T21:31:37.727436446Z" level=info msg="StopContainer for \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\" with timeout 5 (s)" Jan 13 21:31:37.769890 containerd[1971]: time="2025-01-13T21:31:37.734771599Z" level=info msg="Stop container \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\" with signal terminated" Jan 13 21:31:37.754880 systemd[1]: cri-containerd-6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625.scope: Deactivated successfully. Jan 13 21:31:37.755456 systemd[1]: cri-containerd-6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625.scope: Consumed 11.066s CPU time. Jan 13 21:31:37.799109 containerd[1971]: time="2025-01-13T21:31:37.798812590Z" level=info msg="shim disconnected" id=6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625 namespace=k8s.io Jan 13 21:31:37.799564 containerd[1971]: time="2025-01-13T21:31:37.799078802Z" level=warning msg="cleaning up after shim disconnected" id=6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625 namespace=k8s.io Jan 13 21:31:37.799564 containerd[1971]: time="2025-01-13T21:31:37.799435249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:31:37.803792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625-rootfs.mount: Deactivated successfully. 
Jan 13 21:31:37.898106 containerd[1971]: time="2025-01-13T21:31:37.898060632Z" level=info msg="StopContainer for \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\" returns successfully" Jan 13 21:31:37.898995 containerd[1971]: time="2025-01-13T21:31:37.898939160Z" level=info msg="StopPodSandbox for \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\"" Jan 13 21:31:37.899367 containerd[1971]: time="2025-01-13T21:31:37.899010372Z" level=info msg="Container to stop \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:31:37.899367 containerd[1971]: time="2025-01-13T21:31:37.899082585Z" level=info msg="Container to stop \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:31:37.899367 containerd[1971]: time="2025-01-13T21:31:37.899099455Z" level=info msg="Container to stop \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:31:37.909530 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f-shm.mount: Deactivated successfully. Jan 13 21:31:37.919334 systemd[1]: cri-containerd-e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f.scope: Deactivated successfully. Jan 13 21:31:37.968113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f-rootfs.mount: Deactivated successfully. Jan 13 21:31:37.971821 containerd[1971]: time="2025-01-13T21:31:37.968306513Z" level=info msg="shim disconnected" id=e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f namespace=k8s.io Jan 13 21:31:37.971821 containerd[1971]: time="2025-01-13T21:31:37.968368055Z" level=warning msg="cleaning up after shim disconnected" id=e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f namespace=k8s.io Jan 13 21:31:37.971821 containerd[1971]: time="2025-01-13T21:31:37.968380677Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:31:38.008148 containerd[1971]: time="2025-01-13T21:31:38.008096538Z" level=info msg="TearDown network for sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" successfully" Jan 13 21:31:38.008148 containerd[1971]: time="2025-01-13T21:31:38.008150503Z" level=info msg="StopPodSandbox for \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" returns successfully" Jan 13 21:31:38.174626 kubelet[3350]: I0113 21:31:38.173436 3350 scope.go:117] "RemoveContainer" containerID="6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625" Jan 13 21:31:38.178927 kubelet[3350]: I0113 21:31:38.178896 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-xtables-lock\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.179156 kubelet[3350]: I0113 21:31:38.179068 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-var-run-calico\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.179156 
kubelet[3350]: I0113 21:31:38.179123 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-var-lib-calico\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.179156 kubelet[3350]: I0113 21:31:38.179146 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-lib-modules\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.179310 kubelet[3350]: I0113 21:31:38.179173 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-bin-dir\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.179310 kubelet[3350]: I0113 21:31:38.179219 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-flexvol-driver-host\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.179310 kubelet[3350]: I0113 21:31:38.179256 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-tigera-ca-bundle\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.179310 kubelet[3350]: I0113 21:31:38.179299 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-log-dir\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.182042 kubelet[3350]: I0113 21:31:38.179320 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-net-dir\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.182042 kubelet[3350]: I0113 21:31:38.179365 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-policysync\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.182042 kubelet[3350]: I0113 21:31:38.179394 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58gr8\" (UniqueName: \"kubernetes.io/projected/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-kube-api-access-58gr8\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.182042 kubelet[3350]: I0113 21:31:38.179440 3350 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-node-certs\") pod \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\" (UID: \"15779d9e-e5e1-4ea9-8a63-9efe7092cdc5\") " Jan 13 21:31:38.184037 kubelet[3350]: E0113 21:31:38.183941 3350 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" containerName="flexvol-driver" Jan 13 21:31:38.184037 kubelet[3350]: E0113 21:31:38.183965 3350 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65" containerName="calico-typha" Jan 13 21:31:38.184037 kubelet[3350]: E0113 21:31:38.183976 3350 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" containerName="install-cni" Jan 13 21:31:38.184037 kubelet[3350]: E0113 21:31:38.183983 3350 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" containerName="calico-node" Jan 13 21:31:38.187442 kubelet[3350]: I0113 21:31:38.187094 3350 memory_manager.go:354] "RemoveStaleState removing state" podUID="15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" containerName="calico-node" Jan 13 21:31:38.187442 kubelet[3350]: I0113 21:31:38.187116 3350 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac0d9f0-2f5d-4f9a-9fb2-44ea3d420d65" containerName="calico-typha" Jan 13 21:31:38.212188 kubelet[3350]: I0113 21:31:38.207936 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:31:38.229194 kubelet[3350]: I0113 21:31:38.228340 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:31:38.229194 kubelet[3350]: I0113 21:31:38.228422 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:31:38.229194 kubelet[3350]: I0113 21:31:38.228446 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:31:38.229194 kubelet[3350]: I0113 21:31:38.229068 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:31:38.229194 kubelet[3350]: I0113 21:31:38.229103 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:31:38.232100 kubelet[3350]: I0113 21:31:38.231622 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-node-certs" (OuterVolumeSpecName: "node-certs") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:31:38.240721 kubelet[3350]: I0113 21:31:38.240634 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:31:38.242215 kubelet[3350]: I0113 21:31:38.241575 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:31:38.242215 kubelet[3350]: I0113 21:31:38.241610 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:31:38.242215 kubelet[3350]: I0113 21:31:38.241632 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-policysync" (OuterVolumeSpecName: "policysync") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:31:38.252189 kubelet[3350]: I0113 21:31:38.251471 3350 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-kube-api-access-58gr8" (OuterVolumeSpecName: "kube-api-access-58gr8") pod "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" (UID: "15779d9e-e5e1-4ea9-8a63-9efe7092cdc5"). InnerVolumeSpecName "kube-api-access-58gr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:31:38.268325 containerd[1971]: time="2025-01-13T21:31:38.268275177Z" level=info msg="RemoveContainer for \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\"" Jan 13 21:31:38.271079 systemd[1]: Created slice kubepods-besteffort-podd5af18d4_b54f_41cf_8d9a_530ed74a709a.slice - libcontainer container kubepods-besteffort-podd5af18d4_b54f_41cf_8d9a_530ed74a709a.slice. 
Jan 13 21:31:38.279274 containerd[1971]: time="2025-01-13T21:31:38.279146412Z" level=info msg="RemoveContainer for \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\" returns successfully"
Jan 13 21:31:38.285090 kubelet[3350]: I0113 21:31:38.284955 3350 scope.go:117] "RemoveContainer" containerID="df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23"
Jan 13 21:31:38.287959 kubelet[3350]: I0113 21:31:38.287247 3350 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-58gr8\" (UniqueName: \"kubernetes.io/projected/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-kube-api-access-58gr8\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.287959 kubelet[3350]: I0113 21:31:38.287275 3350 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-node-certs\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.287959 kubelet[3350]: I0113 21:31:38.287290 3350 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-xtables-lock\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.287959 kubelet[3350]: I0113 21:31:38.287305 3350 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-var-run-calico\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.287959 kubelet[3350]: I0113 21:31:38.287319 3350 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-var-lib-calico\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.287959 kubelet[3350]: I0113 21:31:38.287332 3350 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-lib-modules\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.287959 kubelet[3350]: I0113 21:31:38.287344 3350 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-bin-dir\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.287959 kubelet[3350]: I0113 21:31:38.287358 3350 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-flexvol-driver-host\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.289298 kubelet[3350]: I0113 21:31:38.287372 3350 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-tigera-ca-bundle\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.289298 kubelet[3350]: I0113 21:31:38.287384 3350 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-log-dir\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.289298 kubelet[3350]: I0113 21:31:38.287397 3350 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-cni-net-dir\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.289298 kubelet[3350]: I0113 21:31:38.287408 3350 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5-policysync\") on node \"ip-172-31-18-253\" DevicePath \"\""
Jan 13 21:31:38.291891 containerd[1971]: time="2025-01-13T21:31:38.291858992Z" level=info msg="RemoveContainer for \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\""
Jan 13 21:31:38.301276 containerd[1971]: time="2025-01-13T21:31:38.300808810Z" level=info msg="RemoveContainer for \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\" returns successfully"
Jan 13 21:31:38.301628 kubelet[3350]: I0113 21:31:38.301306 3350 scope.go:117] "RemoveContainer" containerID="8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a"
Jan 13 21:31:38.304341 containerd[1971]: time="2025-01-13T21:31:38.304309244Z" level=info msg="RemoveContainer for \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\""
Jan 13 21:31:38.314140 containerd[1971]: time="2025-01-13T21:31:38.314051568Z" level=info msg="RemoveContainer for \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\" returns successfully"
Jan 13 21:31:38.314397 kubelet[3350]: I0113 21:31:38.314360 3350 scope.go:117] "RemoveContainer" containerID="6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625"
Jan 13 21:31:38.315126 containerd[1971]: time="2025-01-13T21:31:38.315037387Z" level=error msg="ContainerStatus for \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\": not found"
Jan 13 21:31:38.315448 kubelet[3350]: E0113 21:31:38.315408 3350 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\": not found" containerID="6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625"
Jan 13 21:31:38.315661 kubelet[3350]: I0113 21:31:38.315446 3350 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625"} err="failed to get container status \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\": rpc error: code = NotFound desc = an error occurred when try to find container \"6032a8d2536c4f450a6ec2152c7b1c2a55c2d86ca7d841260dfed1cd81248625\": not found"
Jan 13 21:31:38.315661 kubelet[3350]: I0113 21:31:38.315477 3350 scope.go:117] "RemoveContainer" containerID="df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23"
Jan 13 21:31:38.315946 containerd[1971]: time="2025-01-13T21:31:38.315904502Z" level=error msg="ContainerStatus for \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\": not found"
Jan 13 21:31:38.316218 kubelet[3350]: E0113 21:31:38.316043 3350 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\": not found" containerID="df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23"
Jan 13 21:31:38.316218 kubelet[3350]: I0113 21:31:38.316064 3350 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23"} err="failed to get container status \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\": rpc error: code = NotFound desc = an error occurred when try to find container \"df49fcbeb7eb396f10be825e280d757830296b7e8fa8f3727eac9ec59ca4ba23\": not found"
Jan 13 21:31:38.316218 kubelet[3350]: I0113 21:31:38.316080 3350 scope.go:117] "RemoveContainer" containerID="8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a"
Jan 13 21:31:38.316497 containerd[1971]: time="2025-01-13T21:31:38.316461734Z" level=error msg="ContainerStatus for \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\": not found"
Jan 13 21:31:38.316728 kubelet[3350]: E0113 21:31:38.316688 3350 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\": not found" containerID="8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a"
Jan 13 21:31:38.316728 kubelet[3350]: I0113 21:31:38.316717 3350 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a"} err="failed to get container status \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\": rpc error: code = NotFound desc = an error occurred when try to find container \"8387cbaef64d572ffde92ecb29d165acd739e7bba06dc843362f10a2cb83c53a\": not found"
\"kubernetes.io/configmap/d5af18d4-b54f-41cf-8d9a-530ed74a709a-tigera-ca-bundle\") pod \"calico-node-k4dxq\" (UID: \"d5af18d4-b54f-41cf-8d9a-530ed74a709a\") " pod="calico-system/calico-node-k4dxq" Jan 13 21:31:38.388539 kubelet[3350]: I0113 21:31:38.388411 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5af18d4-b54f-41cf-8d9a-530ed74a709a-xtables-lock\") pod \"calico-node-k4dxq\" (UID: \"d5af18d4-b54f-41cf-8d9a-530ed74a709a\") " pod="calico-system/calico-node-k4dxq" Jan 13 21:31:38.388539 kubelet[3350]: I0113 21:31:38.388432 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d5af18d4-b54f-41cf-8d9a-530ed74a709a-policysync\") pod \"calico-node-k4dxq\" (UID: \"d5af18d4-b54f-41cf-8d9a-530ed74a709a\") " pod="calico-system/calico-node-k4dxq" Jan 13 21:31:38.388871 kubelet[3350]: I0113 21:31:38.388452 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d5af18d4-b54f-41cf-8d9a-530ed74a709a-flexvol-driver-host\") pod \"calico-node-k4dxq\" (UID: \"d5af18d4-b54f-41cf-8d9a-530ed74a709a\") " pod="calico-system/calico-node-k4dxq" Jan 13 21:31:38.388871 kubelet[3350]: I0113 21:31:38.388478 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx768\" (UniqueName: \"kubernetes.io/projected/d5af18d4-b54f-41cf-8d9a-530ed74a709a-kube-api-access-kx768\") pod \"calico-node-k4dxq\" (UID: \"d5af18d4-b54f-41cf-8d9a-530ed74a709a\") " pod="calico-system/calico-node-k4dxq" Jan 13 21:31:38.388871 kubelet[3350]: I0113 21:31:38.388507 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d5af18d4-b54f-41cf-8d9a-530ed74a709a-node-certs\") pod \"calico-node-k4dxq\" (UID: \"d5af18d4-b54f-41cf-8d9a-530ed74a709a\") " pod="calico-system/calico-node-k4dxq" Jan 13 21:31:38.388871 kubelet[3350]: I0113 21:31:38.388531 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d5af18d4-b54f-41cf-8d9a-530ed74a709a-var-lib-calico\") pod \"calico-node-k4dxq\" (UID: \"d5af18d4-b54f-41cf-8d9a-530ed74a709a\") " pod="calico-system/calico-node-k4dxq" Jan 13 21:31:38.388871 kubelet[3350]: I0113 21:31:38.388554 3350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d5af18d4-b54f-41cf-8d9a-530ed74a709a-cni-log-dir\") pod \"calico-node-k4dxq\" (UID: \"d5af18d4-b54f-41cf-8d9a-530ed74a709a\") " pod="calico-system/calico-node-k4dxq" Jan 13 21:31:38.419382 systemd[1]: Removed slice kubepods-besteffort-pod15779d9e_e5e1_4ea9_8a63_9efe7092cdc5.slice - libcontainer container kubepods-besteffort-pod15779d9e_e5e1_4ea9_8a63_9efe7092cdc5.slice. Jan 13 21:31:38.421379 systemd[1]: kubepods-besteffort-pod15779d9e_e5e1_4ea9_8a63_9efe7092cdc5.slice: Consumed 11.657s CPU time. Jan 13 21:31:38.534347 systemd[1]: var-lib-kubelet-pods-15779d9e\x2de5e1\x2d4ea9\x2d8a63\x2d9efe7092cdc5-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. 
Jan 13 21:31:38.535213 systemd[1]: var-lib-kubelet-pods-15779d9e\x2de5e1\x2d4ea9\x2d8a63\x2d9efe7092cdc5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58gr8.mount: Deactivated successfully.
Jan 13 21:31:38.535454 systemd[1]: var-lib-kubelet-pods-15779d9e\x2de5e1\x2d4ea9\x2d8a63\x2d9efe7092cdc5-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Jan 13 21:31:38.588488 containerd[1971]: time="2025-01-13T21:31:38.588029268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k4dxq,Uid:d5af18d4-b54f-41cf-8d9a-530ed74a709a,Namespace:calico-system,Attempt:0,}"
Jan 13 21:31:38.632506 containerd[1971]: time="2025-01-13T21:31:38.632302147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:31:38.632506 containerd[1971]: time="2025-01-13T21:31:38.632384136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:31:38.632506 containerd[1971]: time="2025-01-13T21:31:38.632405186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:31:38.634877 containerd[1971]: time="2025-01-13T21:31:38.632588709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:31:38.681873 systemd[1]: Started cri-containerd-d78cc6b6648170a22be72f185539b3c2588fbaa664320c1305b39ad4467b6c78.scope - libcontainer container d78cc6b6648170a22be72f185539b3c2588fbaa664320c1305b39ad4467b6c78.
Jan 13 21:31:38.753987 containerd[1971]: time="2025-01-13T21:31:38.753258944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k4dxq,Uid:d5af18d4-b54f-41cf-8d9a-530ed74a709a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d78cc6b6648170a22be72f185539b3c2588fbaa664320c1305b39ad4467b6c78\""
Jan 13 21:31:38.800694 containerd[1971]: time="2025-01-13T21:31:38.799092461Z" level=info msg="CreateContainer within sandbox \"d78cc6b6648170a22be72f185539b3c2588fbaa664320c1305b39ad4467b6c78\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 21:31:38.805490 kubelet[3350]: I0113 21:31:38.805441 3350 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15779d9e-e5e1-4ea9-8a63-9efe7092cdc5" path="/var/lib/kubelet/pods/15779d9e-e5e1-4ea9-8a63-9efe7092cdc5/volumes"
Jan 13 21:31:38.824675 containerd[1971]: time="2025-01-13T21:31:38.824606034Z" level=info msg="CreateContainer within sandbox \"d78cc6b6648170a22be72f185539b3c2588fbaa664320c1305b39ad4467b6c78\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"adfaa42cbf90a2ed74425f9efb012fe3def275fd0d9f2ae983542e9a2c15cafc\""
Jan 13 21:31:38.827284 containerd[1971]: time="2025-01-13T21:31:38.827204851Z" level=info msg="StartContainer for \"adfaa42cbf90a2ed74425f9efb012fe3def275fd0d9f2ae983542e9a2c15cafc\""
Jan 13 21:31:38.872874 systemd[1]: Started cri-containerd-adfaa42cbf90a2ed74425f9efb012fe3def275fd0d9f2ae983542e9a2c15cafc.scope - libcontainer container adfaa42cbf90a2ed74425f9efb012fe3def275fd0d9f2ae983542e9a2c15cafc.
Jan 13 21:31:38.922243 containerd[1971]: time="2025-01-13T21:31:38.922164743Z" level=info msg="StartContainer for \"adfaa42cbf90a2ed74425f9efb012fe3def275fd0d9f2ae983542e9a2c15cafc\" returns successfully"
Jan 13 21:31:40.005572 systemd[1]: cri-containerd-adfaa42cbf90a2ed74425f9efb012fe3def275fd0d9f2ae983542e9a2c15cafc.scope: Deactivated successfully.
Jan 13 21:31:40.081925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adfaa42cbf90a2ed74425f9efb012fe3def275fd0d9f2ae983542e9a2c15cafc-rootfs.mount: Deactivated successfully.
Jan 13 21:31:40.106518 containerd[1971]: time="2025-01-13T21:31:40.106339138Z" level=info msg="shim disconnected" id=adfaa42cbf90a2ed74425f9efb012fe3def275fd0d9f2ae983542e9a2c15cafc namespace=k8s.io
Jan 13 21:31:40.106518 containerd[1971]: time="2025-01-13T21:31:40.106514611Z" level=warning msg="cleaning up after shim disconnected" id=adfaa42cbf90a2ed74425f9efb012fe3def275fd0d9f2ae983542e9a2c15cafc namespace=k8s.io
Jan 13 21:31:40.107161 containerd[1971]: time="2025-01-13T21:31:40.106533961Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:31:40.130237 containerd[1971]: time="2025-01-13T21:31:40.130172740Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:31:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:31:41.124312 containerd[1971]: time="2025-01-13T21:31:41.124266546Z" level=info msg="CreateContainer within sandbox \"d78cc6b6648170a22be72f185539b3c2588fbaa664320c1305b39ad4467b6c78\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 21:31:41.160996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600233165.mount: Deactivated successfully.
Jan 13 21:31:41.161824 containerd[1971]: time="2025-01-13T21:31:41.161782814Z" level=info msg="CreateContainer within sandbox \"d78cc6b6648170a22be72f185539b3c2588fbaa664320c1305b39ad4467b6c78\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647\""
Jan 13 21:31:41.167095 containerd[1971]: time="2025-01-13T21:31:41.166335961Z" level=info msg="StartContainer for \"43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647\""
Jan 13 21:31:41.298656 systemd[1]: Started cri-containerd-43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647.scope - libcontainer container 43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647.
Jan 13 21:31:41.360594 containerd[1971]: time="2025-01-13T21:31:41.360550017Z" level=info msg="StartContainer for \"43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647\" returns successfully"
Jan 13 21:31:42.632525 systemd[1]: Started sshd@26-172.31.18.253:22-147.75.109.163:47670.service - OpenSSH per-connection server daemon (147.75.109.163:47670).
Jan 13 21:31:42.908466 sshd[7547]: Accepted publickey for core from 147.75.109.163 port 47670 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:31:42.931233 sshd[7547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:31:42.944338 systemd-logind[1953]: New session 27 of user core.
Jan 13 21:31:42.950954 systemd[1]: Started session-27.scope - Session 27 of User core.
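flexvol-driver and install-cni are calico-node's init containers, which is why each one starts, exits, and has its shim torn down before the next begins; the runc cleanup warning in between is logged after the shim has already gone. A hedged client-go sketch for checking how far such a pod has got is shown below; only the pod name and namespace come from this log, the rest (in-cluster config, trimmed error handling) is illustrative:

    // Sketch: inspect init-container progress for the pod created above.
    // Assumes in-cluster credentials; not the kubelet's own code path.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := cs.CoreV1().Pods("calico-system").Get(
            context.Background(), "calico-node-k4dxq", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, s := range pod.Status.InitContainerStatuses {
            // flexvol-driver and install-cni should terminate with exit code 0.
            if t := s.State.Terminated; t != nil {
                fmt.Printf("%s exited %d\n", s.Name, t.ExitCode)
            } else {
                fmt.Printf("%s not finished yet\n", s.Name)
            }
        }
    }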
Jan 13 21:31:43.443271 kubelet[3350]: I0113 21:31:43.442415 3350 scope.go:117] "RemoveContainer" containerID="69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193"
Jan 13 21:31:43.684256 containerd[1971]: time="2025-01-13T21:31:43.683011858Z" level=info msg="RemoveContainer for \"69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193\""
Jan 13 21:31:43.703389 containerd[1971]: time="2025-01-13T21:31:43.701497507Z" level=info msg="RemoveContainer for \"69f28f3286b365a0b0965678c79b4d28d46d640ef96381278e352c5137a6a193\" returns successfully"
Jan 13 21:31:43.704173 containerd[1971]: time="2025-01-13T21:31:43.703582621Z" level=info msg="StopPodSandbox for \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\""
Jan 13 21:31:43.704173 containerd[1971]: time="2025-01-13T21:31:43.703696204Z" level=info msg="TearDown network for sandbox \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\" successfully"
Jan 13 21:31:43.704173 containerd[1971]: time="2025-01-13T21:31:43.703713040Z" level=info msg="StopPodSandbox for \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\" returns successfully"
Jan 13 21:31:43.711243 containerd[1971]: time="2025-01-13T21:31:43.709041437Z" level=info msg="RemovePodSandbox for \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\""
Jan 13 21:31:43.711243 containerd[1971]: time="2025-01-13T21:31:43.709083439Z" level=info msg="Forcibly stopping sandbox \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\""
Jan 13 21:31:43.711243 containerd[1971]: time="2025-01-13T21:31:43.709427558Z" level=info msg="TearDown network for sandbox \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\" successfully"
Jan 13 21:31:43.766678 containerd[1971]: time="2025-01-13T21:31:43.766602794Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:31:43.767802 containerd[1971]: time="2025-01-13T21:31:43.767092636Z" level=info msg="RemovePodSandbox \"4a1018334a24462761b3adc6c4e3f4fe72e3f828a642e647df116493d084f8c5\" returns successfully"
Jan 13 21:31:43.770329 containerd[1971]: time="2025-01-13T21:31:43.770301521Z" level=info msg="StopPodSandbox for \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\""
Jan 13 21:31:43.771552 containerd[1971]: time="2025-01-13T21:31:43.770416258Z" level=info msg="TearDown network for sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" successfully"
Jan 13 21:31:43.771552 containerd[1971]: time="2025-01-13T21:31:43.770432948Z" level=info msg="StopPodSandbox for \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" returns successfully"
Jan 13 21:31:43.771552 containerd[1971]: time="2025-01-13T21:31:43.770794810Z" level=info msg="RemovePodSandbox for \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\""
Jan 13 21:31:43.771552 containerd[1971]: time="2025-01-13T21:31:43.770819763Z" level=info msg="Forcibly stopping sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\""
Jan 13 21:31:43.771552 containerd[1971]: time="2025-01-13T21:31:43.770872197Z" level=info msg="TearDown network for sandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" successfully"
Jan 13 21:31:43.777831 containerd[1971]: time="2025-01-13T21:31:43.777794344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:31:43.778012 containerd[1971]: time="2025-01-13T21:31:43.777993833Z" level=info msg="RemovePodSandbox \"e0db687c3cbee2fc38c15316cce1a1650151c49eaa9830f2789cac4abf08623f\" returns successfully"
Jan 13 21:31:43.779782 containerd[1971]: time="2025-01-13T21:31:43.779747473Z" level=info msg="StopPodSandbox for \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\""
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:43.997 [WARNING][7569] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0"
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.003 [INFO][7569] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8"
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.003 [INFO][7569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" iface="eth0" netns=""
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.004 [INFO][7569] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8"
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.004 [INFO][7569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8"
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.129 [INFO][7575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0"
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.131 [INFO][7575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.131 [INFO][7575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.152 [WARNING][7575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0"
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.153 [INFO][7575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0"
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.157 [INFO][7575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:31:44.172703 containerd[1971]: 2025-01-13 21:31:44.161 [INFO][7569] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8"
Jan 13 21:31:44.172703 containerd[1971]: time="2025-01-13T21:31:44.170814080Z" level=info msg="TearDown network for sandbox \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\" successfully"
Jan 13 21:31:44.172703 containerd[1971]: time="2025-01-13T21:31:44.170950064Z" level=info msg="StopPodSandbox for \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\" returns successfully"
Jan 13 21:31:44.176971 containerd[1971]: time="2025-01-13T21:31:44.174617504Z" level=info msg="RemovePodSandbox for \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\""
Jan 13 21:31:44.176971 containerd[1971]: time="2025-01-13T21:31:44.174692145Z" level=info msg="Forcibly stopping sandbox \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\""
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.319 [WARNING][7594] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" WorkloadEndpoint="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0"
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.320 [INFO][7594] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8"
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.320 [INFO][7594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" iface="eth0" netns=""
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.320 [INFO][7594] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8"
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.320 [INFO][7594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8"
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.408 [INFO][7600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0"
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.409 [INFO][7600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.409 [INFO][7600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.438 [WARNING][7600] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0"
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.439 [INFO][7600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" HandleID="k8s-pod-network.998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8" Workload="ip--172--31--18--253-k8s-calico--kube--controllers--64cf758d46--dk7vw-eth0"
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.444 [INFO][7600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 21:31:44.453276 containerd[1971]: 2025-01-13 21:31:44.448 [INFO][7594] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8"
Jan 13 21:31:44.456437 containerd[1971]: time="2025-01-13T21:31:44.454034018Z" level=info msg="TearDown network for sandbox \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\" successfully"
Jan 13 21:31:44.470709 containerd[1971]: time="2025-01-13T21:31:44.468719190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:31:44.470709 containerd[1971]: time="2025-01-13T21:31:44.468808990Z" level=info msg="RemovePodSandbox \"998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8\" returns successfully"
Jan 13 21:31:44.550214 sshd[7547]: pam_unix(sshd:session): session closed for user core
Jan 13 21:31:44.559190 systemd[1]: sshd@26-172.31.18.253:22-147.75.109.163:47670.service: Deactivated successfully.
Jan 13 21:31:44.562426 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 21:31:44.565603 systemd-logind[1953]: Session 27 logged out. Waiting for processes to exit.
Jan 13 21:31:44.567358 systemd-logind[1953]: Removed session 27.
Jan 13 21:31:45.071244 systemd[1]: cri-containerd-43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647.scope: Deactivated successfully.
Jan 13 21:31:45.071976 systemd[1]: cri-containerd-43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647.scope: Consumed 1.040s CPU time.
Jan 13 21:31:45.178247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647-rootfs.mount: Deactivated successfully.
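The StopPodSandbox / "Forcibly stopping" / RemovePodSandbox sequences above are the kubelet garbage-collecting sandboxes whose workloads are long gone, which is why both the sandbox lookup and the Calico IPAM release come back not-found and are deliberately ignored. The same two CRI calls, sketched against the gRPC API (endpoint assumed, as before; sandbox ID from the log):

    // Sketch: stop, then remove, a pod sandbox over the CRI, as in the
    // RemovePodSandbox entries above.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()
        id := "998c6adeddc7e7014961bb6942573f2853f3866b31d06b6888e036393c7babf8"

        // Per the CRI contract both calls are idempotent, so repeating them
        // against an already-reclaimed sandbox must not return an error.
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Fatal(err)
        }
        if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Fatal(err)
        }
    }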
Jan 13 21:31:45.206772 containerd[1971]: time="2025-01-13T21:31:45.206678641Z" level=info msg="shim disconnected" id=43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647 namespace=k8s.io
Jan 13 21:31:45.206772 containerd[1971]: time="2025-01-13T21:31:45.206759055Z" level=warning msg="cleaning up after shim disconnected" id=43922db5b6ff3a884ce9c22d6362b34254f92ff9c5c81697ff4bff6c269e7647 namespace=k8s.io
Jan 13 21:31:45.206772 containerd[1971]: time="2025-01-13T21:31:45.206772225Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:31:45.824324 containerd[1971]: time="2025-01-13T21:31:45.824273715Z" level=info msg="CreateContainer within sandbox \"d78cc6b6648170a22be72f185539b3c2588fbaa664320c1305b39ad4467b6c78\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 13 21:31:45.861259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819808764.mount: Deactivated successfully.
Jan 13 21:31:45.865960 containerd[1971]: time="2025-01-13T21:31:45.865916231Z" level=info msg="CreateContainer within sandbox \"d78cc6b6648170a22be72f185539b3c2588fbaa664320c1305b39ad4467b6c78\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0c6c764077a2b02b8e972e91659ea2a64b4ddc21ac1f51cf641ff31b32510904\""
Jan 13 21:31:45.867281 containerd[1971]: time="2025-01-13T21:31:45.867247922Z" level=info msg="StartContainer for \"0c6c764077a2b02b8e972e91659ea2a64b4ddc21ac1f51cf641ff31b32510904\""
Jan 13 21:31:45.916884 systemd[1]: Started cri-containerd-0c6c764077a2b02b8e972e91659ea2a64b4ddc21ac1f51cf641ff31b32510904.scope - libcontainer container 0c6c764077a2b02b8e972e91659ea2a64b4ddc21ac1f51cf641ff31b32510904.
Jan 13 21:31:45.966711 containerd[1971]: time="2025-01-13T21:31:45.966667080Z" level=info msg="StartContainer for \"0c6c764077a2b02b8e972e91659ea2a64b4ddc21ac1f51cf641ff31b32510904\" returns successfully"
Jan 13 21:31:46.776651 systemd[1]: run-containerd-runc-k8s.io-0c6c764077a2b02b8e972e91659ea2a64b4ddc21ac1f51cf641ff31b32510904-runc.eT3ki7.mount: Deactivated successfully.
Jan 13 21:31:49.110132 (udev-worker)[7891]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:31:49.112611 (udev-worker)[7890]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:31:49.613189 systemd[1]: Started sshd@27-172.31.18.253:22-147.75.109.163:50232.service - OpenSSH per-connection server daemon (147.75.109.163:50232).
Jan 13 21:31:49.816295 sshd[7931]: Accepted publickey for core from 147.75.109.163 port 50232 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:31:49.819112 sshd[7931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:31:49.824703 systemd-logind[1953]: New session 28 of user core.
Jan 13 21:31:49.830867 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 21:31:51.204366 sshd[7931]: pam_unix(sshd:session): session closed for user core
Jan 13 21:31:51.211623 systemd-logind[1953]: Session 28 logged out. Waiting for processes to exit.
Jan 13 21:31:51.212387 systemd[1]: sshd@27-172.31.18.253:22-147.75.109.163:50232.service: Deactivated successfully.
Jan 13 21:31:51.215930 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 21:31:51.217062 systemd-logind[1953]: Removed session 28.
Jan 13 21:31:56.274353 systemd[1]: Started sshd@28-172.31.18.253:22-147.75.109.163:50244.service - OpenSSH per-connection server daemon (147.75.109.163:50244).
Jan 13 21:31:56.464476 systemd[1]: run-containerd-runc-k8s.io-96496e38a1645e90f3fdc852bde6559bf23d647d732802c6e1c3f3bb18c00116-runc.5uJKyX.mount: Deactivated successfully.
Jan 13 21:31:56.582150 sshd[7956]: Accepted publickey for core from 147.75.109.163 port 50244 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:31:56.585825 sshd[7956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:31:56.591904 systemd-logind[1953]: New session 29 of user core.
Jan 13 21:31:56.597005 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 13 21:31:57.385195 sshd[7956]: pam_unix(sshd:session): session closed for user core
Jan 13 21:31:57.388737 systemd[1]: sshd@28-172.31.18.253:22-147.75.109.163:50244.service: Deactivated successfully.
Jan 13 21:31:57.391617 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 21:31:57.394485 systemd-logind[1953]: Session 29 logged out. Waiting for processes to exit.
Jan 13 21:31:57.396009 systemd-logind[1953]: Removed session 29.
Jan 13 21:32:02.426188 systemd[1]: Started sshd@29-172.31.18.253:22-147.75.109.163:38238.service - OpenSSH per-connection server daemon (147.75.109.163:38238).
Jan 13 21:32:02.653970 sshd[8019]: Accepted publickey for core from 147.75.109.163 port 38238 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:32:02.668804 sshd[8019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:32:02.680934 systemd-logind[1953]: New session 30 of user core.
Jan 13 21:32:02.685904 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 13 21:32:03.151193 sshd[8019]: pam_unix(sshd:session): session closed for user core
Jan 13 21:32:03.156922 systemd-logind[1953]: Session 30 logged out. Waiting for processes to exit.
Jan 13 21:32:03.158279 systemd[1]: sshd@29-172.31.18.253:22-147.75.109.163:38238.service: Deactivated successfully.
Jan 13 21:32:03.161107 systemd[1]: session-30.scope: Deactivated successfully.
Jan 13 21:32:03.162863 systemd-logind[1953]: Removed session 30.
Jan 13 21:32:18.235347 systemd[1]: cri-containerd-d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7.scope: Deactivated successfully.
Jan 13 21:32:18.236484 systemd[1]: cri-containerd-d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7.scope: Consumed 4.234s CPU time, 22.7M memory peak, 0B memory swap peak.
Jan 13 21:32:18.425797 containerd[1971]: time="2025-01-13T21:32:18.423493399Z" level=info msg="shim disconnected" id=d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7 namespace=k8s.io
Jan 13 21:32:18.425797 containerd[1971]: time="2025-01-13T21:32:18.423584355Z" level=warning msg="cleaning up after shim disconnected" id=d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7 namespace=k8s.io
Jan 13 21:32:18.425797 containerd[1971]: time="2025-01-13T21:32:18.423597430Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:32:18.429232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7-rootfs.mount: Deactivated successfully.
Jan 13 21:32:18.687008 systemd[1]: cri-containerd-44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7.scope: Deactivated successfully.
Jan 13 21:32:18.687316 systemd[1]: cri-containerd-44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7.scope: Consumed 2.936s CPU time.
Jan 13 21:32:18.725898 containerd[1971]: time="2025-01-13T21:32:18.725772475Z" level=info msg="shim disconnected" id=44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7 namespace=k8s.io
Jan 13 21:32:18.728933 containerd[1971]: time="2025-01-13T21:32:18.725876223Z" level=warning msg="cleaning up after shim disconnected" id=44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7 namespace=k8s.io
Jan 13 21:32:18.728933 containerd[1971]: time="2025-01-13T21:32:18.726037584Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:32:18.728700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7-rootfs.mount: Deactivated successfully.
Jan 13 21:32:19.261268 kubelet[3350]: I0113 21:32:19.261206 3350 scope.go:117] "RemoveContainer" containerID="d389acc28fef3e3c0b835ec3e90c1657eaf60cd4403f6594b7f73aae8e836ee7"
Jan 13 21:32:19.262455 kubelet[3350]: I0113 21:32:19.262420 3350 scope.go:117] "RemoveContainer" containerID="44527eafb553897c12ea1dbd353f1d1a886fb67336a0a4b36f7ca10cf11218d7"
Jan 13 21:32:19.285848 containerd[1971]: time="2025-01-13T21:32:19.285766025Z" level=info msg="CreateContainer within sandbox \"b82e7c1eb137fcfad0d6c3bdf78322c280bf5e94acf128b3ef89f45870bda43a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 21:32:19.286457 containerd[1971]: time="2025-01-13T21:32:19.285786867Z" level=info msg="CreateContainer within sandbox \"589544544c9a168f6fad07df9210aef7c3c4739ba95aff13e81aaeadeb52bda9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 13 21:32:19.335675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3357222163.mount: Deactivated successfully.
Jan 13 21:32:19.340332 containerd[1971]: time="2025-01-13T21:32:19.340290116Z" level=info msg="CreateContainer within sandbox \"589544544c9a168f6fad07df9210aef7c3c4739ba95aff13e81aaeadeb52bda9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"62e3457938221df34f4c81c611b11a5510037cfc2c67fd3c9ec528840b94f8c8\""
Jan 13 21:32:19.341988 containerd[1971]: time="2025-01-13T21:32:19.341953271Z" level=info msg="StartContainer for \"62e3457938221df34f4c81c611b11a5510037cfc2c67fd3c9ec528840b94f8c8\""
Jan 13 21:32:19.387173 containerd[1971]: time="2025-01-13T21:32:19.387125042Z" level=info msg="CreateContainer within sandbox \"b82e7c1eb137fcfad0d6c3bdf78322c280bf5e94acf128b3ef89f45870bda43a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"57b4e9679a9f576ec0d490c2671072651bb7d0ee15ecb6c2e48b62f1d6556765\""
Jan 13 21:32:19.387692 containerd[1971]: time="2025-01-13T21:32:19.387654932Z" level=info msg="StartContainer for \"57b4e9679a9f576ec0d490c2671072651bb7d0ee15ecb6c2e48b62f1d6556765\""
Jan 13 21:32:19.391762 systemd[1]: Started cri-containerd-62e3457938221df34f4c81c611b11a5510037cfc2c67fd3c9ec528840b94f8c8.scope - libcontainer container 62e3457938221df34f4c81c611b11a5510037cfc2c67fd3c9ec528840b94f8c8.
Jan 13 21:32:19.502166 containerd[1971]: time="2025-01-13T21:32:19.501837001Z" level=info msg="StartContainer for \"62e3457938221df34f4c81c611b11a5510037cfc2c67fd3c9ec528840b94f8c8\" returns successfully"
Jan 13 21:32:19.506888 systemd[1]: Started cri-containerd-57b4e9679a9f576ec0d490c2671072651bb7d0ee15ecb6c2e48b62f1d6556765.scope - libcontainer container 57b4e9679a9f576ec0d490c2671072651bb7d0ee15ecb6c2e48b62f1d6556765.
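Attempt:1 in the CreateContainer metadata marks these as restarts: kube-controller-manager and tigera-operator both exited and are being recreated inside their existing sandboxes. A hedged client-go sketch for surfacing such restarts cluster-wide follows; the in-cluster config and namespace-wide listing are assumptions, not anything this log shows running:

    // Sketch: report containers with a non-zero restart count, which would
    // surface the kube-controller-manager and tigera-operator restarts above.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(
            context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            for _, s := range p.Status.ContainerStatuses {
                if s.RestartCount > 0 {
                    fmt.Printf("%s/%s %s restarts=%d\n",
                        p.Namespace, p.Name, s.Name, s.RestartCount)
                }
            }
        }
    }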
Jan 13 21:32:19.593626 containerd[1971]: time="2025-01-13T21:32:19.593329807Z" level=info msg="StartContainer for \"57b4e9679a9f576ec0d490c2671072651bb7d0ee15ecb6c2e48b62f1d6556765\" returns successfully"
Jan 13 21:32:23.613134 systemd[1]: cri-containerd-1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456.scope: Deactivated successfully.
Jan 13 21:32:23.615520 systemd[1]: cri-containerd-1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456.scope: Consumed 1.744s CPU time, 17.0M memory peak, 0B memory swap peak.
Jan 13 21:32:23.662664 containerd[1971]: time="2025-01-13T21:32:23.657485072Z" level=info msg="shim disconnected" id=1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456 namespace=k8s.io
Jan 13 21:32:23.662664 containerd[1971]: time="2025-01-13T21:32:23.657557088Z" level=warning msg="cleaning up after shim disconnected" id=1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456 namespace=k8s.io
Jan 13 21:32:23.662664 containerd[1971]: time="2025-01-13T21:32:23.657570696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:32:23.660631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456-rootfs.mount: Deactivated successfully.
Jan 13 21:32:24.241548 kubelet[3350]: I0113 21:32:24.241485 3350 scope.go:117] "RemoveContainer" containerID="1b1fe8b4e3d0741f97077c292999e7b87045266889003c23e9485fe852c5c456"
Jan 13 21:32:24.244312 containerd[1971]: time="2025-01-13T21:32:24.244267759Z" level=info msg="CreateContainer within sandbox \"1742c86dde411b3bc4b3fcbdbcb67dda1c9fabf9a0d0e51a8bacd1ef8eea9ec7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 21:32:24.292068 containerd[1971]: time="2025-01-13T21:32:24.292022738Z" level=info msg="CreateContainer within sandbox \"1742c86dde411b3bc4b3fcbdbcb67dda1c9fabf9a0d0e51a8bacd1ef8eea9ec7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1b6098d82eec7a51b8d020212f739b24404efd28c2430956498621a57ea564b0\""
Jan 13 21:32:24.292570 containerd[1971]: time="2025-01-13T21:32:24.292541735Z" level=info msg="StartContainer for \"1b6098d82eec7a51b8d020212f739b24404efd28c2430956498621a57ea564b0\""
Jan 13 21:32:24.330309 kubelet[3350]: E0113 21:32:24.330247 3350 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-253?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 21:32:24.344964 systemd[1]: Started cri-containerd-1b6098d82eec7a51b8d020212f739b24404efd28c2430956498621a57ea564b0.scope - libcontainer container 1b6098d82eec7a51b8d020212f739b24404efd28c2430956498621a57ea564b0.
Jan 13 21:32:24.399560 containerd[1971]: time="2025-01-13T21:32:24.399493318Z" level=info msg="StartContainer for \"1b6098d82eec7a51b8d020212f739b24404efd28c2430956498621a57ea564b0\" returns successfully"
Jan 13 21:32:24.650429 systemd[1]: run-containerd-runc-k8s.io-1b6098d82eec7a51b8d020212f739b24404efd28c2430956498621a57ea564b0-runc.qxZq7J.mount: Deactivated successfully.
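The "Failed to update lease" error is the most telling entry in this stretch: while kube-scheduler was being restarted, the kubelet could not renew its node lease against the local apiserver within the 10s timeout, consistent with the control-plane containers on this node having just crashed and restarted under load. How fresh a node's lease is can be checked directly; a minimal sketch follows, where the lease namespace and name are taken from the URL in the log and the client setup is assumed:

    // Sketch: read this node's Lease and report how stale it is. The node
    // lease renewTime is what node-health detection keys off.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Namespace and name as in the logged PUT:
        // .../namespaces/kube-node-lease/leases/ip-172-31-18-253
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
            context.Background(), "ip-172-31-18-253", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        if rt := lease.Spec.RenewTime; rt != nil {
            fmt.Printf("lease last renewed %s ago\n",
                time.Since(rt.Time).Round(time.Second))
        }
    }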