Jan 29 12:03:58.110558 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 12:03:58.110600 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:03:58.110614 kernel: BIOS-provided physical RAM map:
Jan 29 12:03:58.110625 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 12:03:58.110635 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 12:03:58.110645 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 12:03:58.110660 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 29 12:03:58.110670 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 29 12:03:58.110777 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 29 12:03:58.110788 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 12:03:58.110799 kernel: NX (Execute Disable) protection: active
Jan 29 12:03:58.110810 kernel: APIC: Static calls initialized
Jan 29 12:03:58.110822 kernel: SMBIOS 2.7 present.
Jan 29 12:03:58.110833 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 29 12:03:58.110850 kernel: Hypervisor detected: KVM
Jan 29 12:03:58.110862 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 12:03:58.110873 kernel: kvm-clock: using sched offset of 6478275382 cycles
Jan 29 12:03:58.110886 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 12:03:58.110898 kernel: tsc: Detected 2499.996 MHz processor
Jan 29 12:03:58.110910 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 12:03:58.110923 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 12:03:58.110938 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 29 12:03:58.110950 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 12:03:58.110963 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 12:03:58.110974 kernel: Using GB pages for direct mapping
Jan 29 12:03:58.110986 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:03:58.110997 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 29 12:03:58.111010 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 29 12:03:58.111022 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 29 12:03:58.111034 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 29 12:03:58.111049 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 29 12:03:58.111062 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 29 12:03:58.111074 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 29 12:03:58.111086 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 29 12:03:58.111098 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 29 12:03:58.111109 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 29 12:03:58.111121 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 29 12:03:58.111133 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 29 12:03:58.111146 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 29 12:03:58.111161 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 29 12:03:58.112222 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 29 12:03:58.112246 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 29 12:03:58.112260 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 29 12:03:58.112274 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 29 12:03:58.112292 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 29 12:03:58.112304 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 29 12:03:58.112317 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 29 12:03:58.112330 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 29 12:03:58.112344 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 12:03:58.112447 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 12:03:58.112460 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 29 12:03:58.112473 kernel: NUMA: Initialized distance table, cnt=1
Jan 29 12:03:58.112486 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 29 12:03:58.112503 kernel: Zone ranges:
Jan 29 12:03:58.112515 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 12:03:58.112528 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 29 12:03:58.112541 kernel: Normal empty
Jan 29 12:03:58.112554 kernel: Movable zone start for each node
Jan 29 12:03:58.112567 kernel: Early memory node ranges
Jan 29 12:03:58.112581 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 12:03:58.112594 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 29 12:03:58.112607 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 29 12:03:58.112621 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 12:03:58.112637 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 12:03:58.112650 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 29 12:03:58.112662 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 29 12:03:58.112675 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 12:03:58.112688 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 29 12:03:58.112700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 12:03:58.112712 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 12:03:58.112725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 12:03:58.112738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 12:03:58.112754 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 12:03:58.112767 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 12:03:58.112781 kernel: TSC deadline timer available
Jan 29 12:03:58.112794 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 12:03:58.112807 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 12:03:58.112820 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 29 12:03:58.112833 kernel: Booting paravirtualized kernel on KVM
Jan 29 12:03:58.112847 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 12:03:58.112860 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 12:03:58.112876 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 12:03:58.112889 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 12:03:58.112902 kernel: pcpu-alloc: [0] 0 1
Jan 29 12:03:58.112913 kernel: kvm-guest: PV spinlocks enabled
Jan 29 12:03:58.112926 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 12:03:58.112940 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:03:58.112954 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:03:58.113063 kernel: random: crng init done
Jan 29 12:03:58.113082 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 12:03:58.113095 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 12:03:58.113109 kernel: Fallback order for Node 0: 0
Jan 29 12:03:58.113122 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 29 12:03:58.113135 kernel: Policy zone: DMA32
Jan 29 12:03:58.114614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:03:58.114635 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 29 12:03:58.114649 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 12:03:58.115890 kernel: Kernel/User page tables isolation: enabled
Jan 29 12:03:58.115914 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 12:03:58.115927 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 12:03:58.115940 kernel: Dynamic Preempt: voluntary
Jan 29 12:03:58.115952 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:03:58.115966 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:03:58.115980 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 12:03:58.115993 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:03:58.116005 kernel: Rude variant of Tasks RCU enabled.
Jan 29 12:03:58.116018 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:03:58.116035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:03:58.116047 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 12:03:58.116059 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 12:03:58.116072 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:03:58.116085 kernel: Console: colour VGA+ 80x25
Jan 29 12:03:58.116097 kernel: printk: console [ttyS0] enabled
Jan 29 12:03:58.116110 kernel: ACPI: Core revision 20230628
Jan 29 12:03:58.116123 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 29 12:03:58.116136 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 12:03:58.116151 kernel: x2apic enabled
Jan 29 12:03:58.116165 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 12:03:58.116253 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 29 12:03:58.116274 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 29 12:03:58.116288 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 29 12:03:58.116302 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 29 12:03:58.116316 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 12:03:58.116330 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 12:03:58.116344 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 12:03:58.116356 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 12:03:58.116371 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 29 12:03:58.116393 kernel: RETBleed: Vulnerable
Jan 29 12:03:58.116409 kernel: Speculative Store Bypass: Vulnerable
Jan 29 12:03:58.116424 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 12:03:58.116438 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 12:03:58.116452 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 29 12:03:58.116465 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 12:03:58.116479 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 12:03:58.116493 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 12:03:58.116510 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 29 12:03:58.116524 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 29 12:03:58.116536 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 29 12:03:58.116550 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 29 12:03:58.116563 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 29 12:03:58.116576 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 29 12:03:58.116589 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 12:03:58.116602 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 29 12:03:58.116616 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 29 12:03:58.116629 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 29 12:03:58.116641 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 29 12:03:58.116658 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 29 12:03:58.116672 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 29 12:03:58.116686 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 29 12:03:58.116700 kernel: Freeing SMP alternatives memory: 32K
Jan 29 12:03:58.116711 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:03:58.116725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:03:58.116738 kernel: landlock: Up and running.
Jan 29 12:03:58.116751 kernel: SELinux: Initializing.
Jan 29 12:03:58.116764 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 12:03:58.116778 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 12:03:58.116791 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 29 12:03:58.116808 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:03:58.116822 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:03:58.116836 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:03:58.116850 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 29 12:03:58.116863 kernel: signal: max sigframe size: 3632
Jan 29 12:03:58.116876 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:03:58.116890 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:03:58.116904 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 12:03:58.116917 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:03:58.116934 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 12:03:58.116947 kernel: .... node #0, CPUs: #1
Jan 29 12:03:58.117033 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 29 12:03:58.117051 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 29 12:03:58.117065 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 12:03:58.117079 kernel: smpboot: Max logical packages: 1
Jan 29 12:03:58.117094 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 29 12:03:58.117107 kernel: devtmpfs: initialized
Jan 29 12:03:58.117124 kernel: x86/mm: Memory block size: 128MB
Jan 29 12:03:58.117137 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:03:58.117151 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 12:03:58.117165 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:03:58.117190 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:03:58.117204 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:03:58.117218 kernel: audit: type=2000 audit(1738152236.726:1): state=initialized audit_enabled=0 res=1
Jan 29 12:03:58.117230 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:03:58.117244 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 12:03:58.117261 kernel: cpuidle: using governor menu
Jan 29 12:03:58.117275 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:03:58.117289 kernel: dca service started, version 1.12.1
Jan 29 12:03:58.117301 kernel: PCI: Using configuration type 1 for base access
Jan 29 12:03:58.117315 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 12:03:58.117329 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 12:03:58.117344 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 12:03:58.117357 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:03:58.117370 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:03:58.117386 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:03:58.117400 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:03:58.117414 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:03:58.117428 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:03:58.117442 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 29 12:03:58.117455 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 12:03:58.117469 kernel: ACPI: Interpreter enabled
Jan 29 12:03:58.117481 kernel: ACPI: PM: (supports S0 S5)
Jan 29 12:03:58.117495 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 12:03:58.117510 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 12:03:58.117526 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 12:03:58.117540 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 29 12:03:58.117554 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 12:03:58.117806 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:03:58.117946 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 12:03:58.118073 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 12:03:58.118090 kernel: acpiphp: Slot [3] registered
Jan 29 12:03:58.118107 kernel: acpiphp: Slot [4] registered
Jan 29 12:03:58.118121 kernel: acpiphp: Slot [5] registered
Jan 29 12:03:58.118135 kernel: acpiphp: Slot [6] registered
Jan 29 12:03:58.118149 kernel: acpiphp: Slot [7] registered
Jan 29 12:03:58.118161 kernel: acpiphp: Slot [8] registered
Jan 29 12:03:58.123716 kernel: acpiphp: Slot [9] registered
Jan 29 12:03:58.123760 kernel: acpiphp: Slot [10] registered
Jan 29 12:03:58.123775 kernel: acpiphp: Slot [11] registered
Jan 29 12:03:58.123790 kernel: acpiphp: Slot [12] registered
Jan 29 12:03:58.123816 kernel: acpiphp: Slot [13] registered
Jan 29 12:03:58.123830 kernel: acpiphp: Slot [14] registered
Jan 29 12:03:58.123843 kernel: acpiphp: Slot [15] registered
Jan 29 12:03:58.123857 kernel: acpiphp: Slot [16] registered
Jan 29 12:03:58.123870 kernel: acpiphp: Slot [17] registered
Jan 29 12:03:58.123884 kernel: acpiphp: Slot [18] registered
Jan 29 12:03:58.123898 kernel: acpiphp: Slot [19] registered
Jan 29 12:03:58.123912 kernel: acpiphp: Slot [20] registered
Jan 29 12:03:58.123926 kernel: acpiphp: Slot [21] registered
Jan 29 12:03:58.123940 kernel: acpiphp: Slot [22] registered
Jan 29 12:03:58.123957 kernel: acpiphp: Slot [23] registered
Jan 29 12:03:58.123971 kernel: acpiphp: Slot [24] registered
Jan 29 12:03:58.123985 kernel: acpiphp: Slot [25] registered
Jan 29 12:03:58.123998 kernel: acpiphp: Slot [26] registered
Jan 29 12:03:58.124011 kernel: acpiphp: Slot [27] registered
Jan 29 12:03:58.124024 kernel: acpiphp: Slot [28] registered
Jan 29 12:03:58.124037 kernel: acpiphp: Slot [29] registered
Jan 29 12:03:58.124051 kernel: acpiphp: Slot [30] registered
Jan 29 12:03:58.124065 kernel: acpiphp: Slot [31] registered
Jan 29 12:03:58.124081 kernel: PCI host bridge to bus 0000:00
Jan 29 12:03:58.125013 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 12:03:58.125165 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 12:03:58.125414 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 12:03:58.125534 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 29 12:03:58.125652 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 12:03:58.125806 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 12:03:58.125951 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 12:03:58.126889 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 29 12:03:58.127054 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 29 12:03:58.127198 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 29 12:03:58.127328 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 29 12:03:58.127455 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 29 12:03:58.127581 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 29 12:03:58.127718 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 29 12:03:58.127844 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 29 12:03:58.127970 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 29 12:03:58.130062 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 29 12:03:58.130327 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 29 12:03:58.130532 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 29 12:03:58.130673 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 12:03:58.130822 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 29 12:03:58.130956 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 29 12:03:58.131090 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 29 12:03:58.132755 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 29 12:03:58.132785 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 12:03:58.132800 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 12:03:58.132853 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 12:03:58.132868 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 12:03:58.132926 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 12:03:58.132939 kernel: iommu: Default domain type: Translated
Jan 29 12:03:58.132953 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 12:03:58.133072 kernel: PCI: Using ACPI for IRQ routing
Jan 29 12:03:58.133087 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 12:03:58.133128 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 12:03:58.133469 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 29 12:03:58.133827 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 29 12:03:58.136365 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 29 12:03:58.136403 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 12:03:58.136419 kernel: vgaarb: loaded
Jan 29 12:03:58.136435 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 29 12:03:58.136447 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 29 12:03:58.136461 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 12:03:58.136475 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:03:58.136496 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:03:58.136511 kernel: pnp: PnP ACPI init
Jan 29 12:03:58.136524 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 12:03:58.136537 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 12:03:58.136551 kernel: NET: Registered PF_INET protocol family
Jan 29 12:03:58.136565 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 12:03:58.136579 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 12:03:58.136593 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:03:58.136610 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 12:03:58.136624 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 12:03:58.136637 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 12:03:58.136650 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 12:03:58.136664 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 12:03:58.136677 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:03:58.136810 kernel: NET: Registered PF_XDP protocol family
Jan 29 12:03:58.136926 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 12:03:58.137124 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 12:03:58.138300 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 12:03:58.138580 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 29 12:03:58.138604 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 12:03:58.138619 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:03:58.138633 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 12:03:58.138648 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 29 12:03:58.138661 kernel: clocksource: Switched to clocksource tsc
Jan 29 12:03:58.138740 kernel: Initialise system trusted keyrings
Jan 29 12:03:58.138767 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 12:03:58.138782 kernel: Key type asymmetric registered
Jan 29 12:03:58.138796 kernel: Asymmetric key parser 'x509' registered
Jan 29 12:03:58.138810 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 12:03:58.138824 kernel: io scheduler mq-deadline registered
Jan 29 12:03:58.138839 kernel: io scheduler kyber registered
Jan 29 12:03:58.138853 kernel: io scheduler bfq registered
Jan 29 12:03:58.138867 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 12:03:58.138881 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 12:03:58.138899 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 12:03:58.138913 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 12:03:58.138926 kernel: i8042: Warning: Keylock active
Jan 29 12:03:58.138940 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 12:03:58.139090 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 12:03:58.140338 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 29 12:03:58.140486 kernel: rtc_cmos 00:00: registered as rtc0
Jan 29 12:03:58.140712 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T12:03:57 UTC (1738152237)
Jan 29 12:03:58.140741 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 29 12:03:58.140757 kernel: intel_pstate: CPU model not supported
Jan 29 12:03:58.140771 kernel: NET: Registered PF_INET6 protocol family
Jan 29 12:03:58.140785 kernel: Segment Routing with IPv6
Jan 29 12:03:58.140799 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 12:03:58.140813 kernel: NET: Registered PF_PACKET protocol family
Jan 29 12:03:58.140827 kernel: Key type dns_resolver registered
Jan 29 12:03:58.140841 kernel: IPI shorthand broadcast: enabled
Jan 29 12:03:58.140856 kernel: sched_clock: Marking stable (688020551, 388318085)->(1246765700, -170427064)
Jan 29 12:03:58.140873 kernel: registered taskstats version 1
Jan 29 12:03:58.140887 kernel: Loading compiled-in X.509 certificates
Jan 29 12:03:58.140900 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 12:03:58.140914 kernel: Key type .fscrypt registered
Jan 29 12:03:58.141081 kernel: Key type fscrypt-provisioning registered
Jan 29 12:03:58.141098 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:03:58.141113 kernel: ima: Allocated hash algorithm: sha1
Jan 29 12:03:58.141127 kernel: ima: No architecture policies found
Jan 29 12:03:58.141141 kernel: clk: Disabling unused clocks
Jan 29 12:03:58.141159 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 12:03:58.142188 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 12:03:58.142224 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 12:03:58.142239 kernel: Run /init as init process
Jan 29 12:03:58.142253 kernel: with arguments:
Jan 29 12:03:58.142266 kernel: /init
Jan 29 12:03:58.142279 kernel: with environment:
Jan 29 12:03:58.142292 kernel: HOME=/
Jan 29 12:03:58.142306 kernel: TERM=linux
Jan 29 12:03:58.142331 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 12:03:58.142361 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:03:58.142379 systemd[1]: Detected virtualization amazon.
Jan 29 12:03:58.142395 systemd[1]: Detected architecture x86-64.
Jan 29 12:03:58.142409 systemd[1]: Running in initrd.
Jan 29 12:03:58.142427 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:03:58.142443 systemd[1]: Hostname set to <localhost>.
Jan 29 12:03:58.142458 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:03:58.142473 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:03:58.142488 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:03:58.142503 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:03:58.142517 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:03:58.142530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:03:58.142548 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:03:58.142564 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:03:58.142580 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:03:58.142594 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:03:58.142609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:03:58.142624 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:03:58.142641 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:03:58.142656 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:03:58.142670 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:03:58.142684 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:03:58.142699 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:03:58.142713 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:03:58.142728 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:03:58.142743 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:03:58.142758 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:03:58.142776 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:03:58.142791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:03:58.142806 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:03:58.142821 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:03:58.142836 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:03:58.142855 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 12:03:58.142872 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:03:58.142894 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:03:58.142909 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:03:58.142924 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:03:58.142940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:03:58.142955 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:03:58.142971 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:03:58.142991 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:03:58.143041 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:03:58.143075 systemd-journald[178]: Collecting audit messages is disabled.
Jan 29 12:03:58.143108 systemd-journald[178]: Journal started
Jan 29 12:03:58.144398 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2935afa24e7703bbc11068095cd52b) is 4.8M, max 38.6M, 33.7M free.
Jan 29 12:03:58.128305 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:03:58.250452 systemd-modules-load[179]: Inserted module 'overlay'
Jan 29 12:03:58.250484 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:03:58.182963 kernel: Bridge firewalling registered
Jan 29 12:03:58.259788 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 29 12:03:58.261921 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:03:58.273447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:03:58.280841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:03:58.286653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:03:58.290373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:03:58.304727 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:03:58.318376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:03:58.328535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:03:58.336502 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:03:58.341731 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:03:58.345851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:03:58.356439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:03:58.370728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:03:58.375921 dracut-cmdline[210]: dracut-dracut-053
Jan 29 12:03:58.433935 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:03:58.433962 systemd-resolved[215]: Positive Trust Anchors:
Jan 29 12:03:58.434023 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:03:58.452289 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:03:58.456781 systemd-resolved[215]: Defaulting to hostname 'linux'.
Jan 29 12:03:58.462102 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:03:58.517207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:03:58.528217 kernel: SCSI subsystem initialized
Jan 29 12:03:58.528217 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:03:58.545242 kernel: iscsi: registered transport (tcp)
Jan 29 12:03:58.573201 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:03:58.573276 kernel: QLogic iSCSI HBA Driver
Jan 29 12:03:58.623010 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:03:58.630739 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:03:58.691142 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:03:58.691299 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:03:58.691326 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:03:58.738212 kernel: raid6: avx512x4 gen() 14907 MB/s
Jan 29 12:03:58.755225 kernel: raid6: avx512x2 gen() 15229 MB/s
Jan 29 12:03:58.772208 kernel: raid6: avx512x1 gen() 15644 MB/s
Jan 29 12:03:58.789209 kernel: raid6: avx2x4 gen() 15268 MB/s
Jan 29 12:03:58.806211 kernel: raid6: avx2x2 gen() 11805 MB/s
Jan 29 12:03:58.823397 kernel: raid6: avx2x1 gen() 11118 MB/s
Jan 29 12:03:58.823473 kernel: raid6: using algorithm avx512x1 gen() 15644 MB/s
Jan 29 12:03:58.842044 kernel: raid6: .... xor() 12503 MB/s, rmw enabled
Jan 29 12:03:58.842126 kernel: raid6: using avx512x2 recovery algorithm
Jan 29 12:03:58.869208 kernel: xor: automatically using best checksumming function avx
Jan 29 12:03:59.112198 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:03:59.127119 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:03:59.138747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:03:59.170024 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 29 12:03:59.178099 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:03:59.218102 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:03:59.246922 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 29 12:03:59.300118 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:03:59.308344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:03:59.383725 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:03:59.395369 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:03:59.437862 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:03:59.442766 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:03:59.447515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:03:59.453928 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:03:59.464520 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:03:59.496980 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:03:59.548266 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 29 12:03:59.571711 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 29 12:03:59.572281 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 29 12:03:59.572469 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 12:03:59.572491 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:78:51:ed:d3:b1
Jan 29 12:03:59.573759 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:03:59.597783 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 12:03:59.597919 kernel: AES CTR mode by8 optimization enabled
Jan 29 12:03:59.606734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:03:59.613025 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:03:59.618207 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:03:59.620393 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:03:59.620620 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:03:59.622111 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:03:59.681604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:03:59.699470 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 29 12:03:59.699781 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 12:03:59.717171 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 29 12:03:59.734271 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:03:59.734351 kernel: GPT:9289727 != 16777215
Jan 29 12:03:59.734372 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:03:59.734391 kernel: GPT:9289727 != 16777215
Jan 29 12:03:59.734407 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:03:59.734425 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 12:03:59.953324 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (457)
Jan 29 12:04:00.007327 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 29 12:04:00.077330 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (461)
Jan 29 12:04:00.089067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:04:00.118355 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 29 12:04:00.139707 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 29 12:04:00.149148 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 29 12:04:00.149331 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 29 12:04:00.179472 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:04:00.185068 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:04:00.191157 disk-uuid[623]: Primary Header is updated.
Jan 29 12:04:00.191157 disk-uuid[623]: Secondary Entries is updated.
Jan 29 12:04:00.191157 disk-uuid[623]: Secondary Header is updated.
Jan 29 12:04:00.208361 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 12:04:00.224279 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 12:04:00.251049 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:04:01.229249 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 12:04:01.233235 disk-uuid[624]: The operation has completed successfully.
Jan 29 12:04:01.612842 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:04:01.613004 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:04:01.678441 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:04:01.690583 sh[892]: Success
Jan 29 12:04:01.721215 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 12:04:01.888795 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:04:01.901326 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:04:01.906911 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:04:01.965694 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 12:04:01.965773 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:04:01.965808 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:04:01.969847 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:04:01.969928 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:04:02.021279 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 12:04:02.025959 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:04:02.032314 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:04:02.055704 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:04:02.075359 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:04:02.094528 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:04:02.094602 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:04:02.094622 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 12:04:02.099212 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 12:04:02.115211 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:04:02.115195 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:04:02.176028 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:04:02.186485 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:04:02.242151 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:04:02.253706 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:04:02.307401 systemd-networkd[1085]: lo: Link UP
Jan 29 12:04:02.307412 systemd-networkd[1085]: lo: Gained carrier
Jan 29 12:04:02.333929 systemd-networkd[1085]: Enumeration completed
Jan 29 12:04:02.341251 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:04:02.348439 systemd[1]: Reached target network.target - Network.
Jan 29 12:04:02.349309 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:02.349315 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:04:02.363998 systemd-networkd[1085]: eth0: Link UP
Jan 29 12:04:02.364004 systemd-networkd[1085]: eth0: Gained carrier
Jan 29 12:04:02.364020 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:02.392274 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.19.14/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 29 12:04:02.435458 ignition[1025]: Ignition 2.19.0
Jan 29 12:04:02.435840 ignition[1025]: Stage: fetch-offline
Jan 29 12:04:02.436131 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:02.440085 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:04:02.436144 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:04:02.436515 ignition[1025]: Ignition finished successfully
Jan 29 12:04:02.455549 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 12:04:02.484476 ignition[1095]: Ignition 2.19.0
Jan 29 12:04:02.484491 ignition[1095]: Stage: fetch
Jan 29 12:04:02.484953 ignition[1095]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:02.484967 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:04:02.485079 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:04:02.522263 ignition[1095]: PUT result: OK
Jan 29 12:04:02.527417 ignition[1095]: parsed url from cmdline: ""
Jan 29 12:04:02.527429 ignition[1095]: no config URL provided
Jan 29 12:04:02.527439 ignition[1095]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:04:02.527454 ignition[1095]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:04:02.527485 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:04:02.529052 ignition[1095]: PUT result: OK
Jan 29 12:04:02.529132 ignition[1095]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 29 12:04:02.543526 ignition[1095]: GET result: OK
Jan 29 12:04:02.543674 ignition[1095]: parsing config with SHA512: 19fd9c5f0e4f65d4de3d22df1c570c2c925cdb3beba20eea46757958ee4ce37044cc419fcee6d63bd81def82589147029e662cb98577c9f81c4e9e1163921916
Jan 29 12:04:02.567572 unknown[1095]: fetched base config from "system"
Jan 29 12:04:02.567587 unknown[1095]: fetched base config from "system"
Jan 29 12:04:02.568599 ignition[1095]: fetch: fetch complete
Jan 29 12:04:02.567595 unknown[1095]: fetched user config from "aws"
Jan 29 12:04:02.568613 ignition[1095]: fetch: fetch passed
Jan 29 12:04:02.573799 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 12:04:02.568681 ignition[1095]: Ignition finished successfully
Jan 29 12:04:02.585031 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:04:02.627358 ignition[1101]: Ignition 2.19.0
Jan 29 12:04:02.627375 ignition[1101]: Stage: kargs
Jan 29 12:04:02.628289 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:02.628303 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:04:02.628640 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:04:02.630542 ignition[1101]: PUT result: OK
Jan 29 12:04:02.647634 ignition[1101]: kargs: kargs passed
Jan 29 12:04:02.647751 ignition[1101]: Ignition finished successfully
Jan 29 12:04:02.654982 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:04:02.674456 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:04:02.739863 ignition[1108]: Ignition 2.19.0
Jan 29 12:04:02.739879 ignition[1108]: Stage: disks
Jan 29 12:04:02.740492 ignition[1108]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:02.740508 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:04:02.740638 ignition[1108]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:04:02.745121 ignition[1108]: PUT result: OK
Jan 29 12:04:02.753286 ignition[1108]: disks: disks passed
Jan 29 12:04:02.753386 ignition[1108]: Ignition finished successfully
Jan 29 12:04:02.759995 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:04:02.763133 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:04:02.768610 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:04:02.773873 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:04:02.776431 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:04:02.791020 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:04:02.805509 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:04:02.863656 systemd-fsck[1116]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 12:04:02.875261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:04:02.890948 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:04:03.093221 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 12:04:03.094073 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:04:03.094796 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:04:03.107651 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:04:03.111483 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:04:03.114921 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 12:04:03.115004 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:04:03.115116 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:04:03.136320 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:04:03.147568 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:04:03.154512 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1135)
Jan 29 12:04:03.154547 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:04:03.154575 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:04:03.156289 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 12:04:03.161265 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 12:04:03.164938 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:04:03.434701 initrd-setup-root[1159]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:04:03.459041 initrd-setup-root[1166]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:04:03.471445 initrd-setup-root[1173]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:04:03.480696 initrd-setup-root[1180]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:04:03.790304 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:04:03.805415 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:04:03.810384 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:04:03.833650 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:04:03.837328 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:04:03.883460 systemd-networkd[1085]: eth0: Gained IPv6LL
Jan 29 12:04:03.887121 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:04:03.893408 ignition[1249]: INFO : Ignition 2.19.0
Jan 29 12:04:03.893408 ignition[1249]: INFO : Stage: mount
Jan 29 12:04:03.893408 ignition[1249]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:03.893408 ignition[1249]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:04:03.899542 ignition[1249]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:04:03.899542 ignition[1249]: INFO : PUT result: OK
Jan 29 12:04:03.902518 ignition[1249]: INFO : mount: mount passed
Jan 29 12:04:03.903332 ignition[1249]: INFO : Ignition finished successfully
Jan 29 12:04:03.906191 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:04:03.916523 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:04:04.106522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:04:04.133205 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1260)
Jan 29 12:04:04.135995 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:04:04.136064 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:04:04.136085 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 12:04:04.142221 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 12:04:04.142992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:04:04.172662 ignition[1277]: INFO : Ignition 2.19.0
Jan 29 12:04:04.172662 ignition[1277]: INFO : Stage: files
Jan 29 12:04:04.174827 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:04.174827 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:04:04.177891 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:04:04.180748 ignition[1277]: INFO : PUT result: OK
Jan 29 12:04:04.185832 ignition[1277]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:04:04.214475 ignition[1277]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:04:04.214475 ignition[1277]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:04:04.234522 ignition[1277]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:04:04.236521 ignition[1277]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:04:04.239083 unknown[1277]: wrote ssh authorized keys file for user: core
Jan 29 12:04:04.241701 ignition[1277]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:04:04.243755 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 12:04:04.245828 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 12:04:04.245828 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:04:04.245828 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 12:04:04.397897 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 12:04:04.547949 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:04:04.551464 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:04:04.554208 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:04:04.554208 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:04:04.562454 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:04:04.562454 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:04:04.572479 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:04:04.579756 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:04:04.589336 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:04:04.589336 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:04:04.589336 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:04:04.589336 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 12:04:04.589336 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 12:04:04.589336 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 12:04:04.589336 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 12:04:05.094677 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 12:04:05.477342 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 12:04:05.477342 ignition[1277]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 29 12:04:05.483328 ignition[1277]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 12:04:05.486768 ignition[1277]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 12:04:05.486768 ignition[1277]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 29 12:04:05.486768 ignition[1277]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 29 12:04:05.493943 ignition[1277]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:04:05.493943 ignition[1277]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:04:05.493943 ignition[1277]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 29 12:04:05.493943 ignition[1277]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 12:04:05.493943 ignition[1277]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 12:04:05.493943 ignition[1277]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:04:05.493943 ignition[1277]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:04:05.493943 ignition[1277]: INFO : files: files passed
Jan 29 12:04:05.493943 ignition[1277]: INFO : Ignition finished successfully
Jan 29 12:04:05.499545 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:04:05.521646 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:04:05.527436 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:04:05.535034 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:04:05.535168 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:04:05.563608 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:04:05.563608 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:04:05.568802 initrd-setup-root-after-ignition[1310]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:04:05.571775 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:04:05.575040 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:04:05.585423 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:04:05.626039 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:04:05.626156 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:04:05.630131 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:04:05.632636 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:04:05.634946 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:04:05.638402 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:04:05.671110 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:04:05.686882 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:04:05.721327 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:04:05.724050 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:04:05.737649 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:04:05.756793 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:04:05.767695 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:04:05.773774 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:04:05.778198 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:04:05.780962 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:04:05.783754 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:04:05.799148 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 12:04:05.803691 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 12:04:05.805825 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:04:05.807773 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 12:04:05.814496 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 12:04:05.819283 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 12:04:05.821544 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 12:04:05.821729 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:04:05.827873 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:04:05.830992 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:04:05.834339 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 12:04:05.837161 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:04:05.839946 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 12:04:05.840417 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:04:05.845155 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 12:04:05.846854 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:04:05.850170 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 12:04:05.851485 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 12:04:05.859484 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 12:04:05.861716 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 12:04:05.861944 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:04:05.873654 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 12:04:05.875357 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 12:04:05.875580 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:04:05.878530 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 12:04:05.878687 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:04:05.887553 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 12:04:05.887693 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 12:04:05.911517 ignition[1330]: INFO : Ignition 2.19.0
Jan 29 12:04:05.911517 ignition[1330]: INFO : Stage: umount
Jan 29 12:04:05.915857 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:05.915857 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:04:05.915857 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:04:05.915857 ignition[1330]: INFO : PUT result: OK
Jan 29 12:04:05.921313 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 12:04:05.923750 ignition[1330]: INFO : umount: umount passed
Jan 29 12:04:05.925016 ignition[1330]: INFO : Ignition finished successfully
Jan 29 12:04:05.926393 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 12:04:05.926588 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 12:04:05.929067 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 12:04:05.929231 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 12:04:05.932303 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 12:04:05.932424 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 12:04:05.936604 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 12:04:05.936679 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 12:04:05.939558 systemd[1]: Stopped target network.target - Network.
Jan 29 12:04:05.942489 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 12:04:05.943518 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:04:05.946874 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 12:04:05.949987 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 12:04:05.951122 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:04:05.955169 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 12:04:05.957418 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 12:04:05.959797 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 12:04:05.959852 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:04:05.963962 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 12:04:05.964017 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:04:05.968960 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 12:04:05.969034 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 12:04:05.972778 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 12:04:05.972847 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 12:04:05.974596 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 12:04:05.977087 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 12:04:05.979037 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 12:04:05.979129 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 12:04:05.980096 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 12:04:05.980194 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 12:04:05.984897 systemd-networkd[1085]: eth0: DHCPv6 lease lost
Jan 29 12:04:05.987659 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 12:04:05.987769 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 12:04:05.998734 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 12:04:05.999394 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 12:04:06.015278 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 12:04:06.015555 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:04:06.024580 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 12:04:06.028333 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 12:04:06.028475 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:04:06.032422 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:04:06.032519 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:04:06.034167 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:04:06.034349 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:04:06.037978 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 12:04:06.038084 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:04:06.045046 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:04:06.094843 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 12:04:06.096843 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:04:06.102491 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 12:04:06.102693 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 12:04:06.104940 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 12:04:06.105110 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:04:06.107861 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 12:04:06.107904 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:04:06.109216 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 12:04:06.109287 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:04:06.113399 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 12:04:06.113561 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:04:06.116582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:04:06.116640 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:04:06.130460 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 12:04:06.136627 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 12:04:06.136725 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:04:06.142481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:04:06.142556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:04:06.147731 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 12:04:06.147863 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 12:04:06.148867 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 12:04:06.163960 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 12:04:06.206817 systemd[1]: Switching root.
Jan 29 12:04:06.280229 systemd-journald[178]: Journal stopped
Jan 29 12:04:08.886475 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jan 29 12:04:08.886583 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 12:04:08.886611 kernel: SELinux: policy capability open_perms=1
Jan 29 12:04:08.886634 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 12:04:08.886653 kernel: SELinux: policy capability always_check_network=0
Jan 29 12:04:08.886671 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 12:04:08.886696 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 12:04:08.886718 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 12:04:08.886739 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 12:04:08.886760 kernel: audit: type=1403 audit(1738152247.449:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 12:04:08.886783 systemd[1]: Successfully loaded SELinux policy in 57.501ms.
Jan 29 12:04:08.886814 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.209ms.
Jan 29 12:04:08.886843 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:04:08.886867 systemd[1]: Detected virtualization amazon.
Jan 29 12:04:08.887011 systemd[1]: Detected architecture x86-64.
Jan 29 12:04:08.887034 systemd[1]: Detected first boot.
Jan 29 12:04:08.887054 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:04:08.887076 zram_generator::config[1392]: No configuration found.
Jan 29 12:04:08.887104 systemd[1]: Populated /etc with preset unit settings.
Jan 29 12:04:08.887128 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 12:04:08.887158 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 29 12:04:08.892366 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 12:04:08.892410 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 12:04:08.892438 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 12:04:08.892456 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 12:04:08.892482 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 12:04:08.892500 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 12:04:08.892560 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 12:04:08.892585 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 12:04:08.892615 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:04:08.892636 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:04:08.892660 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 12:04:08.892685 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 12:04:08.892713 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 12:04:08.892739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:04:08.892764 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 12:04:08.892788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:04:08.892814 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 12:04:08.892844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:04:08.892869 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:04:08.892894 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:04:08.892918 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:04:08.892944 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 12:04:08.892969 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 12:04:08.892993 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:04:08.893018 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:04:08.893047 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:04:08.893072 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:04:08.893096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:04:08.893123 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 12:04:08.893148 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 12:04:08.895298 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 12:04:08.895363 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 12:04:08.895388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:04:08.895412 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 12:04:08.895442 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 12:04:08.895465 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 12:04:08.895487 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 12:04:08.895510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:04:08.895533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:04:08.895555 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 12:04:08.895579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:04:08.895602 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:04:08.895629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:04:08.895652 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 12:04:08.895674 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:04:08.895697 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 12:04:08.895720 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 29 12:04:08.895743 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 29 12:04:08.895765 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:04:08.895788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:04:08.895811 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 12:04:08.895843 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 12:04:08.895867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:04:08.895890 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:04:08.895914 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 12:04:08.895936 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 12:04:08.895958 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 12:04:08.895980 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 12:04:08.896003 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 12:04:08.896030 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 12:04:08.896053 kernel: loop: module loaded
Jan 29 12:04:08.896076 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 12:04:08.896099 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:04:08.896121 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 12:04:08.896145 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 12:04:08.896165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:04:08.901257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:04:08.901294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:04:08.901327 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:04:08.901350 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:04:08.901372 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 12:04:08.901396 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 12:04:08.901420 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 12:04:08.901450 kernel: ACPI: bus type drm_connector registered
Jan 29 12:04:08.901475 kernel: fuse: init (API version 7.39)
Jan 29 12:04:08.901494 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 12:04:08.901517 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 12:04:08.901581 systemd-journald[1497]: Collecting audit messages is disabled.
Jan 29 12:04:08.901627 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 12:04:08.901652 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:04:08.901756 systemd-journald[1497]: Journal started
Jan 29 12:04:08.901804 systemd-journald[1497]: Runtime Journal (/run/log/journal/ec2935afa24e7703bbc11068095cd52b) is 4.8M, max 38.6M, 33.7M free.
Jan 29 12:04:08.926207 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 12:04:08.935200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:04:08.958205 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:04:08.964205 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:04:08.971932 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:04:08.972167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:04:08.975701 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 12:04:08.976038 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 12:04:08.978038 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:04:08.978302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:04:08.979965 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 12:04:08.996782 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 12:04:09.022674 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 12:04:09.034042 systemd-tmpfiles[1517]: ACLs are not supported, ignoring.
Jan 29 12:04:09.034073 systemd-tmpfiles[1517]: ACLs are not supported, ignoring.
Jan 29 12:04:09.044672 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 12:04:09.060542 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 12:04:09.062558 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:04:09.063371 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:04:09.092606 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 12:04:09.097959 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 12:04:09.116636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:04:09.123531 systemd-journald[1497]: Time spent on flushing to /var/log/journal/ec2935afa24e7703bbc11068095cd52b is 114.587ms for 953 entries.
Jan 29 12:04:09.123531 systemd-journald[1497]: System Journal (/var/log/journal/ec2935afa24e7703bbc11068095cd52b) is 8.0M, max 195.6M, 187.6M free.
Jan 29 12:04:09.251437 systemd-journald[1497]: Received client request to flush runtime journal.
Jan 29 12:04:09.149803 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:04:09.167609 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 12:04:09.240601 udevadm[1554]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 12:04:09.256578 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 12:04:09.259813 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 12:04:09.273254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:04:09.320718 systemd-tmpfiles[1560]: ACLs are not supported, ignoring.
Jan 29 12:04:09.320749 systemd-tmpfiles[1560]: ACLs are not supported, ignoring.
Jan 29 12:04:09.329032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:04:10.257923 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 12:04:10.269422 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:04:10.309669 systemd-udevd[1566]: Using default interface naming scheme 'v255'.
Jan 29 12:04:10.359842 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:04:10.374152 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:04:10.417412 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 12:04:10.450510 (udev-worker)[1582]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:04:10.530441 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 29 12:04:10.531765 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 12:04:10.597239 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jan 29 12:04:10.605217 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 12:04:10.613209 kernel: ACPI: button: Power Button [PWRF]
Jan 29 12:04:10.613294 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Jan 29 12:04:10.616597 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jan 29 12:04:10.616671 kernel: ACPI: button: Sleep Button [SLPF]
Jan 29 12:04:10.640282 systemd-networkd[1570]: lo: Link UP
Jan 29 12:04:10.640295 systemd-networkd[1570]: lo: Gained carrier
Jan 29 12:04:10.642657 systemd-networkd[1570]: Enumeration completed
Jan 29 12:04:10.642921 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:04:10.647003 systemd-networkd[1570]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:10.649344 systemd-networkd[1570]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:04:10.655401 systemd-networkd[1570]: eth0: Link UP
Jan 29 12:04:10.655785 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 12:04:10.660061 systemd-networkd[1570]: eth0: Gained carrier
Jan 29 12:04:10.661254 systemd-networkd[1570]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:10.676423 systemd-networkd[1570]: eth0: DHCPv4 address 172.31.19.14/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 29 12:04:10.686863 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 12:04:10.705229 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1572)
Jan 29 12:04:10.725633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:04:10.891562 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 12:04:10.915783 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 29 12:04:10.926684 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 12:04:11.075378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:04:11.108149 lvm[1688]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:04:11.153642 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 12:04:11.159398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:04:11.177889 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 12:04:11.219318 lvm[1693]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:04:11.265519 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 12:04:11.269933 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:04:11.274048 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 12:04:11.274101 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:04:11.276668 systemd[1]: Reached target machines.target - Containers.
Jan 29 12:04:11.281971 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 12:04:11.297639 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 12:04:11.303393 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 12:04:11.305277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:04:11.307427 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 12:04:11.318374 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 12:04:11.341243 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 12:04:11.351864 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 12:04:11.393965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 12:04:11.402230 kernel: loop0: detected capacity change from 0 to 142488
Jan 29 12:04:11.458407 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 12:04:11.460076 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 12:04:11.506559 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 12:04:11.549214 kernel: loop1: detected capacity change from 0 to 61336
Jan 29 12:04:11.630211 kernel: loop2: detected capacity change from 0 to 140768
Jan 29 12:04:11.690497 kernel: loop3: detected capacity change from 0 to 210664
Jan 29 12:04:11.763207 kernel: loop4: detected capacity change from 0 to 142488
Jan 29 12:04:11.800213 kernel: loop5: detected capacity change from 0 to 61336
Jan 29 12:04:11.828664 kernel: loop6: detected capacity change from 0 to 140768
Jan 29 12:04:11.863222 kernel: loop7: detected capacity change from 0 to 210664
Jan 29 12:04:11.878608 (sd-merge)[1716]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 29 12:04:11.879784 (sd-merge)[1716]: Merged extensions into '/usr'.
Jan 29 12:04:11.896255 systemd[1]: Reloading requested from client PID 1701 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 12:04:11.896279 systemd[1]: Reloading...
Jan 29 12:04:12.077262 zram_generator::config[1747]: No configuration found.
Jan 29 12:04:12.309941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:04:12.426206 ldconfig[1697]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 12:04:12.439244 systemd[1]: Reloading finished in 541 ms.
Jan 29 12:04:12.458401 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 12:04:12.460360 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 12:04:12.472516 systemd[1]: Starting ensure-sysext.service...
Jan 29 12:04:12.481378 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:04:12.493729 systemd[1]: Reloading requested from client PID 1800 ('systemctl') (unit ensure-sysext.service)...
Jan 29 12:04:12.493891 systemd[1]: Reloading...
Jan 29 12:04:12.511546 systemd-tmpfiles[1801]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 12:04:12.512070 systemd-tmpfiles[1801]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 12:04:12.513604 systemd-tmpfiles[1801]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 12:04:12.514131 systemd-tmpfiles[1801]: ACLs are not supported, ignoring.
Jan 29 12:04:12.514236 systemd-tmpfiles[1801]: ACLs are not supported, ignoring.
Jan 29 12:04:12.517821 systemd-tmpfiles[1801]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:04:12.517838 systemd-tmpfiles[1801]: Skipping /boot
Jan 29 12:04:12.533296 systemd-tmpfiles[1801]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:04:12.533311 systemd-tmpfiles[1801]: Skipping /boot
Jan 29 12:04:12.646207 zram_generator::config[1831]: No configuration found.
Jan 29 12:04:12.715302 systemd-networkd[1570]: eth0: Gained IPv6LL
Jan 29 12:04:12.798521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:04:12.888298 systemd[1]: Reloading finished in 393 ms.
Jan 29 12:04:12.915619 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 12:04:12.929050 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:04:12.938266 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 12:04:12.950668 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 12:04:12.960510 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 12:04:12.970426 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:04:12.987106 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 12:04:13.006385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:04:13.006698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:04:13.011530 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:04:13.016766 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:04:13.035518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:04:13.037092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:04:13.037305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:04:13.047667 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 12:04:13.065210 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:04:13.065672 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:04:13.066006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:04:13.083642 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 12:04:13.084949 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:04:13.087972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:04:13.088238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:04:13.096106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:04:13.099081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:04:13.103112 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:04:13.107545 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:04:13.119939 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 12:04:13.136959 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:04:13.137396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:04:13.138484 augenrules[1925]: No rules
Jan 29 12:04:13.143671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:04:13.164598 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:04:13.173576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:04:13.194437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:04:13.195776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:04:13.196052 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 12:04:13.198732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:04:13.203889 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 12:04:13.209115 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 12:04:13.215763 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 12:04:13.217749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:04:13.218141 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:04:13.221875 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:04:13.222050 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:04:13.223525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:04:13.223676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:04:13.225755 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:04:13.225956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:04:13.227244 systemd-resolved[1893]: Positive Trust Anchors:
Jan 29 12:04:13.227530 systemd-resolved[1893]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:04:13.227608 systemd-resolved[1893]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:04:13.232697 systemd[1]: Finished ensure-sysext.service.
Jan 29 12:04:13.239021 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:04:13.239092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:04:13.239115 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 12:04:13.255497 systemd-resolved[1893]: Defaulting to hostname 'linux'.
Jan 29 12:04:13.257629 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:04:13.259752 systemd[1]: Reached target network.target - Network.
Jan 29 12:04:13.264101 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 12:04:13.268982 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:04:13.273481 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:04:13.276966 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 12:04:13.281888 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 12:04:13.285739 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 12:04:13.287441 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 12:04:13.288911 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 12:04:13.290203 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 12:04:13.290247 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:04:13.291207 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:04:13.292796 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 12:04:13.295577 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 12:04:13.298381 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 12:04:13.314871 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 12:04:13.316634 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:04:13.328018 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:04:13.333508 systemd[1]: System is tainted: cgroupsv1
Jan 29 12:04:13.335721 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:04:13.335770 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:04:13.342299 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 12:04:13.371463 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 12:04:13.377092 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 12:04:13.390127 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 12:04:13.396457 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 12:04:13.397749 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 12:04:13.438506 jq[1957]: false
Jan 29 12:04:13.438882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:04:13.449104 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 12:04:13.481854 systemd[1]: Started ntpd.service - Network Time Service.
Jan 29 12:04:13.504406 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 12:04:13.519326 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 12:04:13.522500 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 29 12:04:13.523071 dbus-daemon[1956]: [system] SELinux support is enabled
Jan 29 12:04:13.527885 dbus-daemon[1956]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1570 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 29 12:04:13.532221 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found loop4
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found loop5
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found loop6
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found loop7
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found nvme0n1
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found nvme0n1p1
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found nvme0n1p2
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found nvme0n1p3
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found usr
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found nvme0n1p4
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found nvme0n1p6
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found nvme0n1p7
Jan 29 12:04:13.554386 extend-filesystems[1958]: Found nvme0n1p9
Jan 29 12:04:13.554386 extend-filesystems[1958]: Checking size of /dev/nvme0n1p9
Jan 29 12:04:13.597336 extend-filesystems[1958]: Resized partition /dev/nvme0n1p9
Jan 29 12:04:13.560426 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 12:04:13.627207 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 29 12:04:13.627291 extend-filesystems[1988]: resize2fs 1.47.1 (20-May-2024)
Jan 29 12:04:13.615379 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 12:04:13.617126 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 12:04:13.632561 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 12:04:13.643978 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:04:13.648637 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:04:13.679872 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:04:13.681288 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:04:13.685906 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:04:13.690156 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:04:13.697223 coreos-metadata[1954]: Jan 29 12:04:13.694 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 12:04:13.705366 coreos-metadata[1954]: Jan 29 12:04:13.699 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 12:04:13.709732 coreos-metadata[1954]: Jan 29 12:04:13.708 INFO Fetch successful Jan 29 12:04:13.709732 coreos-metadata[1954]: Jan 29 12:04:13.708 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 12:04:13.709732 coreos-metadata[1954]: Jan 29 12:04:13.709 INFO Fetch successful Jan 29 12:04:13.709732 coreos-metadata[1954]: Jan 29 12:04:13.709 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 12:04:13.715281 jq[1993]: true Jan 29 12:04:13.715103 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:04:13.717536 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:04:13.719430 coreos-metadata[1954]: Jan 29 12:04:13.719 INFO Fetch successful Jan 29 12:04:13.719430 coreos-metadata[1954]: Jan 29 12:04:13.719 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 12:04:13.720403 coreos-metadata[1954]: Jan 29 12:04:13.720 INFO Fetch successful Jan 29 12:04:13.729938 coreos-metadata[1954]: Jan 29 12:04:13.728 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 12:04:13.730153 coreos-metadata[1954]: Jan 29 12:04:13.730 INFO Fetch failed with 404: resource not found Jan 29 12:04:13.730214 coreos-metadata[1954]: Jan 29 12:04:13.730 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 12:04:13.731244 coreos-metadata[1954]: Jan 29 12:04:13.731 INFO Fetch successful Jan 29 12:04:13.733209 coreos-metadata[1954]: Jan 29 12:04:13.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 12:04:13.733329 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 29 12:04:13.733329 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 12:04:13.733329 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: ---------------------------------------------------- Jan 29 12:04:13.733329 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: ntp-4 is maintained by Network Time Foundation, Jan 29 12:04:13.733329 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 12:04:13.733329 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: corporation. 
Support and training for ntp-4 are Jan 29 12:04:13.733329 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: available at https://www.nwtime.org/support Jan 29 12:04:13.733329 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: ---------------------------------------------------- Jan 29 12:04:13.752864 coreos-metadata[1954]: Jan 29 12:04:13.750 INFO Fetch successful Jan 29 12:04:13.752864 coreos-metadata[1954]: Jan 29 12:04:13.750 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: proto: precision = 0.063 usec (-24) Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: basedate set to 2025-01-17 Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: gps base set to 2025-01-19 (week 2350) Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: Listen normally on 3 eth0 172.31.19.14:123 Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: Listen normally on 4 lo [::1]:123 Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: Listen normally on 5 eth0 [fe80::478:51ff:feed:d3b1%2]:123 Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: Listening on routing socket on fd #22 for interface updates Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:04:13.752949 ntpd[1964]: 29 Jan 12:04:13 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:04:13.755437 update_engine[1991]: I20250129 12:04:13.740748 1991 main.cc:92] Flatcar Update Engine starting Jan 29 12:04:13.755437 update_engine[1991]: I20250129 12:04:13.747946 1991 update_check_scheduler.cc:74] Next update check in 3m5s Jan 29 12:04:13.759504 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 12:04:13.818047 coreos-metadata[1954]: Jan 29 12:04:13.755 INFO Fetch successful Jan 29 12:04:13.818047 coreos-metadata[1954]: Jan 29 12:04:13.755 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 12:04:13.818047 coreos-metadata[1954]: Jan 29 12:04:13.770 INFO Fetch successful Jan 29 12:04:13.818047 coreos-metadata[1954]: Jan 29 12:04:13.770 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 12:04:13.818047 coreos-metadata[1954]: Jan 29 12:04:13.772 INFO Fetch successful Jan 29 12:04:13.806719 (ntainerd)[2010]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:04:13.824761 extend-filesystems[1988]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 12:04:13.824761 extend-filesystems[1988]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:04:13.824761 extend-filesystems[1988]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 12:04:13.812003 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
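
At this point ntpd has bound its listen sockets but the kernel flag still reads "Clock Unsynchronized" (TIME_ERROR 0x41); synchronization only arrives later, at the "Clock change detected. Flushing caches." entry further down. A minimal way to inspect the same state on a live host (illustrative commands, not part of the captured log):

    ntpq -p       # list ntpd's peers; a selected sync source is marked with *
    timedatectl   # shows whether the kernel considers the system clock synchronized
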
Jan 29 12:04:13.851835 extend-filesystems[1958]: Resized filesystem in /dev/nvme0n1p9 Jan 29 12:04:13.829812 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:04:13.830167 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:04:13.859146 jq[2005]: true Jan 29 12:04:13.909047 dbus-daemon[1956]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 12:04:13.944656 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:04:13.950516 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:04:13.950559 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:04:13.979256 tar[1999]: linux-amd64/helm Jan 29 12:04:13.966375 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 12:04:13.967875 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:04:13.967904 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:04:13.970094 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:04:13.980395 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:04:13.996745 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:04:14.142351 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2054) Jan 29 12:04:14.142149 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 12:04:14.164425 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 12:04:14.166045 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
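
The extend-filesystems sequence above is an online ext4 grow of the root partition: resize2fs 1.47.1 extends /dev/nvme0n1p9 from 553472 to 1489915 4k blocks while it is mounted read-write on /, matching the kernel's EXT4-fs messages. A rough manual equivalent, assuming the same NVMe layout (a sketch, not commands taken from this log; growpart comes from cloud-utils):

    growpart /dev/nvme0n1 9    # grow partition 9 into the adjacent free space
    resize2fs /dev/nvme0n1p9   # ext4 supports growing online, while mounted
    df -h /                    # confirm the new capacity
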
Jan 29 12:04:14.184778 systemd-logind[1987]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:04:14.197737 systemd-logind[1987]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 29 12:04:14.197769 systemd-logind[1987]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:04:14.198120 systemd-logind[1987]: New seat seat0. Jan 29 12:04:14.204516 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:04:14.298686 bash[2080]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:04:14.301059 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:04:14.316542 systemd[1]: Starting sshkeys.service... Jan 29 12:04:14.342362 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:04:14.352128 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:04:14.519464 dbus-daemon[1956]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 12:04:14.519942 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 12:04:14.520556 dbus-daemon[1956]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2047 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 12:04:14.521717 locksmithd[2048]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:04:14.534798 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 12:04:14.663964 polkitd[2156]: Started polkitd version 121 Jan 29 12:04:14.673571 amazon-ssm-agent[2075]: Initializing new seelog logger Jan 29 12:04:14.682501 amazon-ssm-agent[2075]: New Seelog Logger Creation Complete Jan 29 12:04:14.684275 amazon-ssm-agent[2075]: 2025/01/29 12:04:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:14.684275 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:14.684275 amazon-ssm-agent[2075]: 2025/01/29 12:04:14 processing appconfig overrides Jan 29 12:04:14.697071 amazon-ssm-agent[2075]: 2025/01/29 12:04:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:14.697071 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:14.699516 amazon-ssm-agent[2075]: 2025-01-29 12:04:14 INFO Proxy environment variables: Jan 29 12:04:14.707647 amazon-ssm-agent[2075]: 2025/01/29 12:04:14 processing appconfig overrides Jan 29 12:04:14.720747 amazon-ssm-agent[2075]: 2025/01/29 12:04:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:14.720747 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
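
The coreos-metadata agents in this log (the main instance above, and the SSH-keys instance just started) talk to the EC2 instance metadata service using IMDSv2: each run first PUTs to /latest/api/token, then fetches paths such as meta-data/public-keys/0/openssh-key with the token attached. The same exchange with curl, assuming the 2021-01-03 API version seen in the log (illustrative only):

    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
        -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600')
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key
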
Jan 29 12:04:14.720747 amazon-ssm-agent[2075]: 2025/01/29 12:04:14 processing appconfig overrides Jan 29 12:04:14.722434 coreos-metadata[2133]: Jan 29 12:04:14.718 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 12:04:14.730711 polkitd[2156]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 12:04:14.730807 polkitd[2156]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 12:04:14.736034 polkitd[2156]: Finished loading, compiling and executing 2 rules Jan 29 12:04:14.742307 coreos-metadata[2133]: Jan 29 12:04:14.742 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 12:04:14.746157 amazon-ssm-agent[2075]: 2025/01/29 12:04:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:14.746157 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:04:14.746321 coreos-metadata[2133]: Jan 29 12:04:14.745 INFO Fetch successful Jan 29 12:04:14.746321 coreos-metadata[2133]: Jan 29 12:04:14.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:04:14.745711 dbus-daemon[1956]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 12:04:14.746222 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 12:04:14.753388 amazon-ssm-agent[2075]: 2025/01/29 12:04:14 processing appconfig overrides Jan 29 12:04:14.758331 coreos-metadata[2133]: Jan 29 12:04:14.757 INFO Fetch successful Jan 29 12:04:14.766219 unknown[2133]: wrote ssh authorized keys file for user: core Jan 29 12:04:14.767005 polkitd[2156]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 12:04:14.809863 amazon-ssm-agent[2075]: 2025-01-29 12:04:14 INFO https_proxy: Jan 29 12:04:14.905705 update-ssh-keys[2190]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:04:14.907464 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:04:14.920895 amazon-ssm-agent[2075]: 2025-01-29 12:04:14 INFO http_proxy: Jan 29 12:04:14.921592 systemd[1]: Finished sshkeys.service. Jan 29 12:04:14.992478 systemd-resolved[1893]: System hostname changed to 'ip-172-31-19-14'. Jan 29 12:04:14.995350 systemd-hostnamed[2047]: Hostname set to (transient) Jan 29 12:04:15.027656 amazon-ssm-agent[2075]: 2025-01-29 12:04:14 INFO no_proxy: Jan 29 12:04:15.125334 amazon-ssm-agent[2075]: 2025-01-29 12:04:14 INFO Checking if agent identity type OnPrem can be assumed Jan 29 12:04:15.147627 containerd[2010]: time="2025-01-29T12:04:15.147051009Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:04:15.152144 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:04:15.229021 amazon-ssm-agent[2075]: 2025-01-29 12:04:14 INFO Checking if agent identity type EC2 can be assumed Jan 29 12:04:15.271046 containerd[2010]: time="2025-01-29T12:04:15.270896196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:15.274205 containerd[2010]: time="2025-01-29T12:04:15.273862273Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:15.274205 containerd[2010]: time="2025-01-29T12:04:15.273915790Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:04:15.274205 containerd[2010]: time="2025-01-29T12:04:15.273940699Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:04:15.274752 containerd[2010]: time="2025-01-29T12:04:15.274545004Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:04:15.274752 containerd[2010]: time="2025-01-29T12:04:15.274580211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:15.274752 containerd[2010]: time="2025-01-29T12:04:15.274654929Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:15.274752 containerd[2010]: time="2025-01-29T12:04:15.274675077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:15.275325 containerd[2010]: time="2025-01-29T12:04:15.275132805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:15.275707 containerd[2010]: time="2025-01-29T12:04:15.275424079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:15.275707 containerd[2010]: time="2025-01-29T12:04:15.275459947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:15.275707 containerd[2010]: time="2025-01-29T12:04:15.275478286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:15.275707 containerd[2010]: time="2025-01-29T12:04:15.275589751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:15.276566 containerd[2010]: time="2025-01-29T12:04:15.276535744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:04:15.277139 containerd[2010]: time="2025-01-29T12:04:15.276892342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:04:15.277139 containerd[2010]: time="2025-01-29T12:04:15.276920372Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:04:15.277139 containerd[2010]: time="2025-01-29T12:04:15.277045445Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 12:04:15.277139 containerd[2010]: time="2025-01-29T12:04:15.277100548Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:04:15.291350 containerd[2010]: time="2025-01-29T12:04:15.291300412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:04:15.292706 containerd[2010]: time="2025-01-29T12:04:15.292227430Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:04:15.292706 containerd[2010]: time="2025-01-29T12:04:15.292311273Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:04:15.292706 containerd[2010]: time="2025-01-29T12:04:15.292346346Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:04:15.292706 containerd[2010]: time="2025-01-29T12:04:15.292378271Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:04:15.292706 containerd[2010]: time="2025-01-29T12:04:15.292573631Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:04:15.295248 containerd[2010]: time="2025-01-29T12:04:15.293816428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:04:15.295248 containerd[2010]: time="2025-01-29T12:04:15.294003985Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:04:15.295248 containerd[2010]: time="2025-01-29T12:04:15.294028766Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:04:15.295248 containerd[2010]: time="2025-01-29T12:04:15.294051310Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:04:15.295248 containerd[2010]: time="2025-01-29T12:04:15.294070672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:04:15.295248 containerd[2010]: time="2025-01-29T12:04:15.294090493Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:04:15.295248 containerd[2010]: time="2025-01-29T12:04:15.294107938Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:04:15.295248 containerd[2010]: time="2025-01-29T12:04:15.294128231Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:04:15.295248 containerd[2010]: time="2025-01-29T12:04:15.294150367Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.294172035Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.295700560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.295750367Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.295790492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.295813238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.295831199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.295935977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.295960907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.295982171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.296001494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.296022517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.296045306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.296070979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296199 containerd[2010]: time="2025-01-29T12:04:15.296092452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296923 containerd[2010]: time="2025-01-29T12:04:15.296110802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296923 containerd[2010]: time="2025-01-29T12:04:15.296132492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.296923 containerd[2010]: time="2025-01-29T12:04:15.296162044Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298251042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298290689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298311025Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298375922Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298403216Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298422844Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298442655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298458199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298476872Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298492585Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:04:15.301208 containerd[2010]: time="2025-01-29T12:04:15.298508485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:04:15.301813 containerd[2010]: time="2025-01-29T12:04:15.298929564Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:04:15.301813 containerd[2010]: time="2025-01-29T12:04:15.299026019Z" level=info msg="Connect containerd service" Jan 29 12:04:15.301813 containerd[2010]: time="2025-01-29T12:04:15.299084453Z" level=info msg="using legacy CRI server" Jan 29 12:04:15.301813 containerd[2010]: time="2025-01-29T12:04:15.299094721Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:04:15.301813 containerd[2010]: time="2025-01-29T12:04:15.299352924Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:04:15.307929 containerd[2010]: time="2025-01-29T12:04:15.307871343Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:04:15.311845 containerd[2010]: time="2025-01-29T12:04:15.311802173Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:04:15.313200 containerd[2010]: time="2025-01-29T12:04:15.312160221Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:04:15.313200 containerd[2010]: time="2025-01-29T12:04:15.312244908Z" level=info msg="Start subscribing containerd event" Jan 29 12:04:15.313200 containerd[2010]: time="2025-01-29T12:04:15.312304091Z" level=info msg="Start recovering state" Jan 29 12:04:15.313200 containerd[2010]: time="2025-01-29T12:04:15.312411164Z" level=info msg="Start event monitor" Jan 29 12:04:15.313200 containerd[2010]: time="2025-01-29T12:04:15.312434127Z" level=info msg="Start snapshots syncer" Jan 29 12:04:15.313200 containerd[2010]: time="2025-01-29T12:04:15.312449450Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:04:15.313200 containerd[2010]: time="2025-01-29T12:04:15.312468300Z" level=info msg="Start streaming server" Jan 29 12:04:15.313200 containerd[2010]: time="2025-01-29T12:04:15.312546708Z" level=info msg="containerd successfully booted in 0.169008s" Jan 29 12:04:15.313744 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:04:15.327661 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:04:15.346016 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO Agent will take identity from EC2 Jan 29 12:04:15.339735 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:04:15.366943 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:04:15.368065 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:04:15.380398 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:04:15.428782 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:04:15.433850 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:04:15.450803 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:04:15.464137 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:04:15.469643 systemd[1]: Reached target getty.target - Login Prompts. 
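
containerd booted in about 0.17s, but its CRI plugin logged "failed to load cni during init ... no network config found in /etc/cni/net.d". That is the expected state on a node where no CNI add-on has been installed yet; pods cannot get networking until a plugin writes a config there. Two quick checks on such a host (illustrative commands, not from the captured log):

    ls /etc/cni/net.d    # stays empty until a CNI add-on installs its conflist
    ctr plugins ls       # lists each containerd plugin and whether it loaded or was skipped
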
Jan 29 12:04:15.532118 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:04:15.631746 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:04:15.730833 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 12:04:15.738393 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 29 12:04:15.739661 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 12:04:15.739661 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 12:04:15.739661 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [Registrar] Starting registrar module Jan 29 12:04:15.740031 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 12:04:15.740031 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [EC2Identity] EC2 registration was successful. Jan 29 12:04:15.740031 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [CredentialRefresher] credentialRefresher has started Jan 29 12:04:15.740031 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 12:04:15.740031 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 12:04:15.832638 amazon-ssm-agent[2075]: 2025-01-29 12:04:15 INFO [CredentialRefresher] Next credential rotation will be in 30.108302993266665 minutes Jan 29 12:04:16.007632 tar[1999]: linux-amd64/LICENSE Jan 29 12:04:16.012142 tar[1999]: linux-amd64/README.md Jan 29 12:04:16.034558 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:04:16.370419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:16.373899 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:04:16.384292 systemd[1]: Startup finished in 10.547s (kernel) + 8.990s (userspace) = 19.538s. Jan 29 12:04:16.527240 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:16.759345 amazon-ssm-agent[2075]: 2025-01-29 12:04:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 12:04:16.858954 amazon-ssm-agent[2075]: 2025-01-29 12:04:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2257) started Jan 29 12:04:16.961516 amazon-ssm-agent[2075]: 2025-01-29 12:04:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 12:04:17.439647 kubelet[2246]: E0129 12:04:17.439395 2246 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:17.445442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:17.445741 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:21.106128 systemd-resolved[1893]: Clock change detected. Flushing caches. 
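
The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state of a kubeadm-style node: that file is only written by kubeadm init or kubeadm join, so systemd keeps restarting the unit (the "Scheduled restart job, restart counter" entries that follow) until bootstrap runs. Watching the loop (illustrative):

    systemctl status kubelet     # shows the restart counter and last exit status
    journalctl -u kubelet -n 20  # the most recent failure messages
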
Jan 29 12:04:21.728694 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:04:21.735693 systemd[1]: Started sshd@0-172.31.19.14:22-139.178.68.195:34670.service - OpenSSH per-connection server daemon (139.178.68.195:34670). Jan 29 12:04:21.912244 sshd[2272]: Accepted publickey for core from 139.178.68.195 port 34670 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:21.914765 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:21.936832 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:04:21.949238 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:04:21.959962 systemd-logind[1987]: New session 1 of user core. Jan 29 12:04:21.978250 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:04:21.990272 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:04:21.996551 (systemd)[2278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:04:22.126942 systemd[2278]: Queued start job for default target default.target. Jan 29 12:04:22.127422 systemd[2278]: Created slice app.slice - User Application Slice. Jan 29 12:04:22.127457 systemd[2278]: Reached target paths.target - Paths. Jan 29 12:04:22.127476 systemd[2278]: Reached target timers.target - Timers. Jan 29 12:04:22.138962 systemd[2278]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:04:22.147724 systemd[2278]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:04:22.147790 systemd[2278]: Reached target sockets.target - Sockets. Jan 29 12:04:22.147826 systemd[2278]: Reached target basic.target - Basic System. Jan 29 12:04:22.147905 systemd[2278]: Reached target default.target - Main User Target. Jan 29 12:04:22.147945 systemd[2278]: Startup finished in 143ms. Jan 29 12:04:22.148068 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:04:22.153552 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:04:22.310212 systemd[1]: Started sshd@1-172.31.19.14:22-139.178.68.195:34682.service - OpenSSH per-connection server daemon (139.178.68.195:34682). Jan 29 12:04:22.500217 sshd[2290]: Accepted publickey for core from 139.178.68.195 port 34682 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:22.502000 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:22.506633 systemd-logind[1987]: New session 2 of user core. Jan 29 12:04:22.516750 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:04:22.640449 sshd[2290]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:22.645811 systemd[1]: sshd@1-172.31.19.14:22-139.178.68.195:34682.service: Deactivated successfully. Jan 29 12:04:22.652894 systemd-logind[1987]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:04:22.654649 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:04:22.655987 systemd-logind[1987]: Removed session 2. Jan 29 12:04:22.667209 systemd[1]: Started sshd@2-172.31.19.14:22-139.178.68.195:34684.service - OpenSSH per-connection server daemon (139.178.68.195:34684). 
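
Each SSH login above follows the same systemd pattern: sshd accepts the public key, pam_unix opens a session for core (uid 500), and on the first login systemd starts user-runtime-dir@500.service and the per-user manager user@500.service before placing the shell in session-1.scope. The equivalent inspection commands (illustrative, not from the log):

    loginctl list-sessions              # active logind sessions and their users
    systemctl status user@500.service   # the per-user service manager for core
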
Jan 29 12:04:22.839104 sshd[2298]: Accepted publickey for core from 139.178.68.195 port 34684 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:22.840904 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:22.847068 systemd-logind[1987]: New session 3 of user core. Jan 29 12:04:22.853199 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:04:22.987096 sshd[2298]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:22.992732 systemd[1]: sshd@2-172.31.19.14:22-139.178.68.195:34684.service: Deactivated successfully. Jan 29 12:04:23.000205 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:04:23.000990 systemd-logind[1987]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:04:23.002260 systemd-logind[1987]: Removed session 3. Jan 29 12:04:23.018298 systemd[1]: Started sshd@3-172.31.19.14:22-139.178.68.195:34694.service - OpenSSH per-connection server daemon (139.178.68.195:34694). Jan 29 12:04:23.186408 sshd[2306]: Accepted publickey for core from 139.178.68.195 port 34694 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:23.188374 sshd[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:23.193865 systemd-logind[1987]: New session 4 of user core. Jan 29 12:04:23.199133 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:04:23.322506 sshd[2306]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:23.327129 systemd[1]: sshd@3-172.31.19.14:22-139.178.68.195:34694.service: Deactivated successfully. Jan 29 12:04:23.331758 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:04:23.332532 systemd-logind[1987]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:04:23.333790 systemd-logind[1987]: Removed session 4. Jan 29 12:04:23.351192 systemd[1]: Started sshd@4-172.31.19.14:22-139.178.68.195:34702.service - OpenSSH per-connection server daemon (139.178.68.195:34702). Jan 29 12:04:23.511381 sshd[2314]: Accepted publickey for core from 139.178.68.195 port 34702 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:23.512928 sshd[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:23.519934 systemd-logind[1987]: New session 5 of user core. Jan 29 12:04:23.526290 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:04:23.666538 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:04:23.667219 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:04:23.690190 sudo[2318]: pam_unix(sudo:session): session closed for user root Jan 29 12:04:23.714158 sshd[2314]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:23.721413 systemd[1]: sshd@4-172.31.19.14:22-139.178.68.195:34702.service: Deactivated successfully. Jan 29 12:04:23.727148 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:04:23.728426 systemd-logind[1987]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:04:23.730562 systemd-logind[1987]: Removed session 5. Jan 29 12:04:23.745414 systemd[1]: Started sshd@5-172.31.19.14:22-139.178.68.195:34708.service - OpenSSH per-connection server daemon (139.178.68.195:34708). 
Jan 29 12:04:23.927599 sshd[2323]: Accepted publickey for core from 139.178.68.195 port 34708 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:23.929164 sshd[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:23.943755 systemd-logind[1987]: New session 6 of user core. Jan 29 12:04:23.950535 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:04:24.058897 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:04:24.059607 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:04:24.065159 sudo[2328]: pam_unix(sudo:session): session closed for user root Jan 29 12:04:24.076165 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:04:24.077001 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:04:24.098648 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:04:24.118966 auditctl[2331]: No rules Jan 29 12:04:24.119861 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:04:24.120594 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:04:24.133192 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:04:24.183674 augenrules[2350]: No rules Jan 29 12:04:24.188180 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:04:24.197880 sudo[2327]: pam_unix(sudo:session): session closed for user root Jan 29 12:04:24.222792 sshd[2323]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:24.228047 systemd[1]: sshd@5-172.31.19.14:22-139.178.68.195:34708.service: Deactivated successfully. Jan 29 12:04:24.233452 systemd-logind[1987]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:04:24.234765 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:04:24.238300 systemd-logind[1987]: Removed session 6. Jan 29 12:04:24.252172 systemd[1]: Started sshd@6-172.31.19.14:22-139.178.68.195:34712.service - OpenSSH per-connection server daemon (139.178.68.195:34712). Jan 29 12:04:24.420881 sshd[2359]: Accepted publickey for core from 139.178.68.195 port 34712 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:24.422689 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:24.430076 systemd-logind[1987]: New session 7 of user core. Jan 29 12:04:24.434285 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:04:24.536091 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:04:24.536487 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:04:25.223356 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:04:25.226415 (dockerd)[2379]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:04:25.825484 dockerd[2379]: time="2025-01-29T12:04:25.825373983Z" level=info msg="Starting up" Jan 29 12:04:26.806025 systemd[1]: var-lib-docker-metacopy\x2dcheck1806544983-merged.mount: Deactivated successfully. 
Jan 29 12:04:26.826591 dockerd[2379]: time="2025-01-29T12:04:26.826537852Z" level=info msg="Loading containers: start." Jan 29 12:04:27.034852 kernel: Initializing XFRM netlink socket Jan 29 12:04:27.091389 (udev-worker)[2400]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:04:27.264595 systemd-networkd[1570]: docker0: Link UP Jan 29 12:04:27.293433 dockerd[2379]: time="2025-01-29T12:04:27.293380980Z" level=info msg="Loading containers: done." Jan 29 12:04:27.361614 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1391323433-merged.mount: Deactivated successfully. Jan 29 12:04:27.391337 dockerd[2379]: time="2025-01-29T12:04:27.391276885Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:04:27.391538 dockerd[2379]: time="2025-01-29T12:04:27.391408447Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:04:27.391606 dockerd[2379]: time="2025-01-29T12:04:27.391560716Z" level=info msg="Daemon has completed initialization" Jan 29 12:04:27.456191 dockerd[2379]: time="2025-01-29T12:04:27.455283159Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:04:27.455440 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:04:28.070055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:04:28.078086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:28.954190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:28.961759 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:29.063266 kubelet[2529]: E0129 12:04:29.063212 2529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:29.076330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:29.076617 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:29.248715 containerd[2010]: time="2025-01-29T12:04:29.248411261Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:04:30.033449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount309149824.mount: Deactivated successfully. 
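
dockerd came up on the overlay2 storage driver but warned "Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"; as the message says, image builds may take a slower fallback diff path, with no functional change otherwise. Confirming the driver on a running daemon (illustrative):

    docker info --format '{{.Driver}}'   # expect: overlay2
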
Jan 29 12:04:33.199589 containerd[2010]: time="2025-01-29T12:04:33.199522746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:33.202338 containerd[2010]: time="2025-01-29T12:04:33.202276039Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 12:04:33.206923 containerd[2010]: time="2025-01-29T12:04:33.206771191Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:33.211193 containerd[2010]: time="2025-01-29T12:04:33.211057969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:33.212913 containerd[2010]: time="2025-01-29T12:04:33.212688178Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.964062264s" Jan 29 12:04:33.212913 containerd[2010]: time="2025-01-29T12:04:33.212742753Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 12:04:33.250131 containerd[2010]: time="2025-01-29T12:04:33.250086025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:04:35.609320 containerd[2010]: time="2025-01-29T12:04:35.609265170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:35.611397 containerd[2010]: time="2025-01-29T12:04:35.611169202Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 12:04:35.615328 containerd[2010]: time="2025-01-29T12:04:35.613585930Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:35.618177 containerd[2010]: time="2025-01-29T12:04:35.618136873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:35.623678 containerd[2010]: time="2025-01-29T12:04:35.623586584Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.373456845s" Jan 29 12:04:35.624070 containerd[2010]: time="2025-01-29T12:04:35.624035983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 12:04:35.665450 
containerd[2010]: time="2025-01-29T12:04:35.665412638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:04:37.569484 containerd[2010]: time="2025-01-29T12:04:37.569379698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:37.571529 containerd[2010]: time="2025-01-29T12:04:37.571467465Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 12:04:37.573875 containerd[2010]: time="2025-01-29T12:04:37.573774789Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:37.580356 containerd[2010]: time="2025-01-29T12:04:37.579742318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:37.581637 containerd[2010]: time="2025-01-29T12:04:37.581596348Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.916141769s" Jan 29 12:04:37.581722 containerd[2010]: time="2025-01-29T12:04:37.581644135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 12:04:37.608345 containerd[2010]: time="2025-01-29T12:04:37.608311034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:04:38.931109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579397154.mount: Deactivated successfully. Jan 29 12:04:39.326830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:04:39.334493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:39.952868 containerd[2010]: time="2025-01-29T12:04:39.952504293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:39.958061 containerd[2010]: time="2025-01-29T12:04:39.958008792Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 12:04:39.960346 containerd[2010]: time="2025-01-29T12:04:39.960304469Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:39.961096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:04:39.965478 (kubelet)[2639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:39.971171 containerd[2010]: time="2025-01-29T12:04:39.970632478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:39.973428 containerd[2010]: time="2025-01-29T12:04:39.973387662Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.365018568s" Jan 29 12:04:39.973870 containerd[2010]: time="2025-01-29T12:04:39.973743580Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:04:40.050892 containerd[2010]: time="2025-01-29T12:04:40.050634081Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:04:40.075964 kubelet[2639]: E0129 12:04:40.075876 2639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:40.080009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:40.080322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:40.669659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089601759.mount: Deactivated successfully. 
Jan 29 12:04:42.289171 containerd[2010]: time="2025-01-29T12:04:42.289109637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:42.290918 containerd[2010]: time="2025-01-29T12:04:42.290848037Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 12:04:42.292632 containerd[2010]: time="2025-01-29T12:04:42.292592035Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:42.298693 containerd[2010]: time="2025-01-29T12:04:42.298253123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:42.299602 containerd[2010]: time="2025-01-29T12:04:42.299556837Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.248873883s" Jan 29 12:04:42.299706 containerd[2010]: time="2025-01-29T12:04:42.299612066Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:04:42.334748 containerd[2010]: time="2025-01-29T12:04:42.334262546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:04:42.895589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201593410.mount: Deactivated successfully. 
Jan 29 12:04:42.908991 containerd[2010]: time="2025-01-29T12:04:42.908887215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:42.910352 containerd[2010]: time="2025-01-29T12:04:42.910284300Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 29 12:04:42.912786 containerd[2010]: time="2025-01-29T12:04:42.912724324Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:42.918499 containerd[2010]: time="2025-01-29T12:04:42.916960780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:42.918499 containerd[2010]: time="2025-01-29T12:04:42.918235816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 583.923722ms"
Jan 29 12:04:42.918499 containerd[2010]: time="2025-01-29T12:04:42.918275694Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 29 12:04:42.949509 containerd[2010]: time="2025-01-29T12:04:42.949465980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 12:04:43.689979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578738946.mount: Deactivated successfully.
Jan 29 12:04:45.379177 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 12:04:47.619527 containerd[2010]: time="2025-01-29T12:04:47.619469316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:47.621211 containerd[2010]: time="2025-01-29T12:04:47.620998239Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jan 29 12:04:47.623063 containerd[2010]: time="2025-01-29T12:04:47.623024112Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:47.627140 containerd[2010]: time="2025-01-29T12:04:47.626767364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:47.627974 containerd[2010]: time="2025-01-29T12:04:47.627936832Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.678433794s"
Jan 29 12:04:47.628061 containerd[2010]: time="2025-01-29T12:04:47.627981415Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 29 12:04:50.208015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 12:04:50.220480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:04:50.873059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:04:50.887361 (kubelet)[2833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:04:50.982787 kubelet[2833]: E0129 12:04:50.982736 2833 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:04:50.990678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:04:50.990997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:04:51.340768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:04:51.353185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:04:51.395175 systemd[1]: Reloading requested from client PID 2849 ('systemctl') (unit session-7.scope)...
Jan 29 12:04:51.395198 systemd[1]: Reloading...
Jan 29 12:04:51.594833 zram_generator::config[2892]: No configuration found.
Jan 29 12:04:51.820942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:04:51.984470 systemd[1]: Reloading finished in 583 ms.
Jan 29 12:04:52.098057 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 12:04:52.098857 systemd[1]: kubelet.service: Failed with result 'signal'.
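[Annotation] Each containerd "Pulled image" record above carries the repo tag, digest, compressed size in bytes, and wall-clock pull time; for example, etcd 3.5.12-0 is 57236178 bytes in 4.678433794s, roughly 11.7 MiB/s. A small sketch that recovers throughput from journal lines in exactly this format (assumed input: journalctl output for the containerd unit on stdin; not an official containerd tool):

```go
// pullstats.go — scans journal output for containerd "Pulled image" records
// like the ones above and prints size, duration, and effective throughput.
// Assumes the exact msg format emitted by containerd's CRI plugin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
	"time"
)

// Matches: Pulled image \"<ref>\" with image id ... size \"<bytes>\" in <duration>
var pulled = regexp.MustCompile(`Pulled image \\?"([^"\\]+)\\?" with image id .* size \\?"([0-9]+)\\?" in ([0-9.]+(?:ms|s))`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be long
	for sc.Scan() {
		m := pulled.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		size, _ := strconv.ParseFloat(m[2], 64)
		d, err := time.ParseDuration(m[3]) // e.g. "2.365018568s", "583.923722ms"
		if err != nil || d <= 0 {
			continue
		}
		fmt.Printf("%-50s %10.1f KiB in %-14s (%.2f MiB/s)\n",
			m[1], size/1024, d, size/d.Seconds()/(1024*1024))
	}
}
```

Usage (assumed invocation): journalctl -u containerd | go run pullstats.go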
Jan 29 12:04:52.099494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:04:52.111615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:04:52.702367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:04:52.717636 (kubelet)[2961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 12:04:52.854282 kubelet[2961]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:04:52.857828 kubelet[2961]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 12:04:52.857828 kubelet[2961]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:04:52.859903 kubelet[2961]: I0129 12:04:52.859604 2961 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 12:04:53.449677 kubelet[2961]: I0129 12:04:53.449629 2961 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 12:04:53.449677 kubelet[2961]: I0129 12:04:53.449665 2961 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:04:53.450370 kubelet[2961]: I0129 12:04:53.450337 2961 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 12:04:53.521623 kubelet[2961]: I0129 12:04:53.521576 2961 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:04:53.532925 kubelet[2961]: E0129 12:04:53.531523 2961 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:53.584829 kubelet[2961]: I0129 12:04:53.583928 2961 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 12:04:53.595738 kubelet[2961]: I0129 12:04:53.595532 2961 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:04:53.598758 kubelet[2961]: I0129 12:04:53.595733 2961 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-14","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 12:04:53.599128 kubelet[2961]: I0129 12:04:53.598774 2961 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:04:53.599128 kubelet[2961]: I0129 12:04:53.598845 2961 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 12:04:53.599128 kubelet[2961]: I0129 12:04:53.599086 2961 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:04:53.604952 kubelet[2961]: I0129 12:04:53.603887 2961 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 12:04:53.604952 kubelet[2961]: I0129 12:04:53.603925 2961 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:04:53.604952 kubelet[2961]: I0129 12:04:53.603956 2961 kubelet.go:312] "Adding apiserver pod source"
Jan 29 12:04:53.604952 kubelet[2961]: I0129 12:04:53.603974 2961 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:04:53.604952 kubelet[2961]: W0129 12:04:53.603968 2961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-14&limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:53.604952 kubelet[2961]: E0129 12:04:53.604031 2961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-14&limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:53.611170 kubelet[2961]: W0129 12:04:53.610959 2961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:53.611170 kubelet[2961]: E0129 12:04:53.611036 2961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:53.611392 kubelet[2961]: I0129 12:04:53.611286 2961 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 29 12:04:53.615825 kubelet[2961]: I0129 12:04:53.614178 2961 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:04:53.615825 kubelet[2961]: W0129 12:04:53.614406 2961 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 12:04:53.619582 kubelet[2961]: I0129 12:04:53.619538 2961 server.go:1264] "Started kubelet"
Jan 29 12:04:53.641426 kubelet[2961]: I0129 12:04:53.641379 2961 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 12:04:53.646329 kubelet[2961]: I0129 12:04:53.646269 2961 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 12:04:53.649520 kubelet[2961]: I0129 12:04:53.647656 2961 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 12:04:53.649520 kubelet[2961]: E0129 12:04:53.648249 2961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.14:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-14.181f284d4ab8b038 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-14,UID:ip-172-31-19-14,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-14,},FirstTimestamp:2025-01-29 12:04:53.619511352 +0000 UTC m=+0.881543331,LastTimestamp:2025-01-29 12:04:53.619511352 +0000 UTC m=+0.881543331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-14,}"
Jan 29 12:04:53.649520 kubelet[2961]: I0129 12:04:53.648372 2961 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 12:04:53.649520 kubelet[2961]: I0129 12:04:53.648698 2961 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 12:04:53.654598 kubelet[2961]: I0129 12:04:53.650584 2961 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 12:04:53.662722 kubelet[2961]: I0129 12:04:53.662692 2961 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 12:04:53.662950 kubelet[2961]: I0129 12:04:53.662818 2961 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 12:04:53.663347 kubelet[2961]: E0129 12:04:53.663311 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-14?timeout=10s\": dial tcp 172.31.19.14:6443: connect: connection refused" interval="200ms"
Jan 29 12:04:53.663612 kubelet[2961]: I0129 12:04:53.663587 2961 factory.go:221] Registration of the systemd container factory successfully
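[Annotation] The "Adding static pod path" line above is what lets kubeadm bootstrap a control plane before any API server exists: the kubelet watches /etc/kubernetes/manifests and runs whatever pod manifests appear there. A rough sketch of that directory-watch mechanism using github.com/fsnotify/fsnotify (illustrative; the kubelet's real file source is considerably more involved):

```go
// staticpods.go — minimal sketch of the static-pod-path mechanism noted in
// the log ("Adding static pod path" path="/etc/kubernetes/manifests").
// Not kubelet source; just the directory-watch idea.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/etc/kubernetes/manifests"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// kubeadm writes kube-apiserver.yaml, kube-controller-manager.yaml,
			// kube-scheduler.yaml (and etcd.yaml) here; each create or write
			// would trigger a (re)sync of the corresponding static pod.
			log.Printf("manifest event: %s %s", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```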
Jan 29 12:04:53.663697 kubelet[2961]: I0129 12:04:53.663674 2961 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 12:04:53.666817 kubelet[2961]: W0129 12:04:53.665996 2961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:53.666817 kubelet[2961]: E0129 12:04:53.666056 2961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:53.666817 kubelet[2961]: I0129 12:04:53.666305 2961 factory.go:221] Registration of the containerd container factory successfully
Jan 29 12:04:53.683547 kubelet[2961]: I0129 12:04:53.683494 2961 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 12:04:53.685496 kubelet[2961]: I0129 12:04:53.685114 2961 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 12:04:53.685496 kubelet[2961]: I0129 12:04:53.685150 2961 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 12:04:53.685496 kubelet[2961]: I0129 12:04:53.685173 2961 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 12:04:53.685496 kubelet[2961]: E0129 12:04:53.685225 2961 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 12:04:53.703624 kubelet[2961]: W0129 12:04:53.703496 2961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:53.703624 kubelet[2961]: E0129 12:04:53.703565 2961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:53.721090 kubelet[2961]: I0129 12:04:53.719489 2961 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 12:04:53.721090 kubelet[2961]: I0129 12:04:53.719508 2961 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 12:04:53.721090 kubelet[2961]: I0129 12:04:53.719547 2961 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:04:53.729108 kubelet[2961]: I0129 12:04:53.728953 2961 policy_none.go:49] "None policy: Start"
Jan 29 12:04:53.730331 kubelet[2961]: I0129 12:04:53.729913 2961 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 12:04:53.730331 kubelet[2961]: I0129 12:04:53.729937 2961 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 12:04:53.755168 kubelet[2961]: I0129 12:04:53.754398 2961 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 12:04:53.755168 kubelet[2961]: I0129 12:04:53.754620 2961 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 12:04:53.755168 kubelet[2961]: I0129 12:04:53.754739 2961 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 12:04:53.757817 kubelet[2961]: I0129 12:04:53.757775 2961 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-14"
Jan 29 12:04:53.759182 kubelet[2961]: E0129 12:04:53.759153 2961 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.14:6443/api/v1/nodes\": dial tcp 172.31.19.14:6443: connect: connection refused" node="ip-172-31-19-14"
Jan 29 12:04:53.759430 kubelet[2961]: E0129 12:04:53.759413 2961 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-14\" not found"
Jan 29 12:04:53.785980 kubelet[2961]: I0129 12:04:53.785929 2961 topology_manager.go:215] "Topology Admit Handler" podUID="af24ae53861aa01b259c0e76e6e0aa99" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-14"
Jan 29 12:04:53.788067 kubelet[2961]: I0129 12:04:53.787829 2961 topology_manager.go:215] "Topology Admit Handler" podUID="7d85ddbbe414b8251fae1c2d00224b4d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-14"
Jan 29 12:04:53.789172 kubelet[2961]: I0129 12:04:53.789151 2961 topology_manager.go:215] "Topology Admit Handler" podUID="a9355bfed64ad5138c5fc0d8eb949189" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-14"
Jan 29 12:04:53.863876 kubelet[2961]: I0129 12:04:53.863835 2961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14"
Jan 29 12:04:53.864404 kubelet[2961]: I0129 12:04:53.863887 2961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14"
Jan 29 12:04:53.864404 kubelet[2961]: I0129 12:04:53.863916 2961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af24ae53861aa01b259c0e76e6e0aa99-ca-certs\") pod \"kube-apiserver-ip-172-31-19-14\" (UID: \"af24ae53861aa01b259c0e76e6e0aa99\") " pod="kube-system/kube-apiserver-ip-172-31-19-14"
Jan 29 12:04:53.864404 kubelet[2961]: I0129 12:04:53.863937 2961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af24ae53861aa01b259c0e76e6e0aa99-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-14\" (UID: \"af24ae53861aa01b259c0e76e6e0aa99\") " pod="kube-system/kube-apiserver-ip-172-31-19-14"
Jan 29 12:04:53.864404 kubelet[2961]: I0129 12:04:53.863958 2961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14"
Jan 29 12:04:53.864404 kubelet[2961]: I0129 12:04:53.863982 2961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14"
Jan 29 12:04:53.864548 kubelet[2961]: I0129 12:04:53.864007 2961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af24ae53861aa01b259c0e76e6e0aa99-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-14\" (UID: \"af24ae53861aa01b259c0e76e6e0aa99\") " pod="kube-system/kube-apiserver-ip-172-31-19-14"
Jan 29 12:04:53.864548 kubelet[2961]: I0129 12:04:53.864034 2961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14"
Jan 29 12:04:53.864548 kubelet[2961]: I0129 12:04:53.864056 2961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9355bfed64ad5138c5fc0d8eb949189-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-14\" (UID: \"a9355bfed64ad5138c5fc0d8eb949189\") " pod="kube-system/kube-scheduler-ip-172-31-19-14"
Jan 29 12:04:53.864548 kubelet[2961]: E0129 12:04:53.863844 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-14?timeout=10s\": dial tcp 172.31.19.14:6443: connect: connection refused" interval="400ms"
Jan 29 12:04:53.962548 kubelet[2961]: I0129 12:04:53.962518 2961 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-14"
Jan 29 12:04:53.962901 kubelet[2961]: E0129 12:04:53.962866 2961 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.14:6443/api/v1/nodes\": dial tcp 172.31.19.14:6443: connect: connection refused" node="ip-172-31-19-14"
Jan 29 12:04:54.097458 containerd[2010]: time="2025-01-29T12:04:54.097148902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-14,Uid:af24ae53861aa01b259c0e76e6e0aa99,Namespace:kube-system,Attempt:0,}"
Jan 29 12:04:54.101257 containerd[2010]: time="2025-01-29T12:04:54.100931702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-14,Uid:7d85ddbbe414b8251fae1c2d00224b4d,Namespace:kube-system,Attempt:0,}"
Jan 29 12:04:54.101257 containerd[2010]: time="2025-01-29T12:04:54.101106494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-14,Uid:a9355bfed64ad5138c5fc0d8eb949189,Namespace:kube-system,Attempt:0,}"
Jan 29 12:04:54.264826 kubelet[2961]: E0129 12:04:54.264685 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-14?timeout=10s\": dial tcp 172.31.19.14:6443: connect: connection refused" interval="800ms"
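[Annotation] Every control-plane request above fails with "connect: connection refused" because the kube-apiserver container is not running yet, and the kubelet's lease controller retries with a doubling interval, visible as interval="200ms", "400ms", "800ms" (and "1.6s" further down). A sketch of that capped exponential-backoff pattern (illustrative, not the kubelet's code; the 7s ceiling here is an assumption, not taken from this log):

```go
// backoff.go — the capped-doubling retry pattern suggested by the lease
// controller messages above (200ms → 400ms → 800ms → 1.6s → …).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiServer = "172.31.19.14:6443" // address taken from the log
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed ceiling for this sketch

	for {
		conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable")
			return
		}
		fmt.Printf("dial %s: %v; will retry in %s\n", apiServer, err, interval)
		time.Sleep(interval)
		if interval *= 2; interval > maxInterval {
			interval = maxInterval
		}
	}
}
```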
Jan 29 12:04:54.365839 kubelet[2961]: I0129 12:04:54.365432 2961 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-14"
Jan 29 12:04:54.365839 kubelet[2961]: E0129 12:04:54.365828 2961 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.14:6443/api/v1/nodes\": dial tcp 172.31.19.14:6443: connect: connection refused" node="ip-172-31-19-14"
Jan 29 12:04:54.550758 kubelet[2961]: W0129 12:04:54.550619 2961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:54.550758 kubelet[2961]: E0129 12:04:54.550684 2961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:54.658084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1504847355.mount: Deactivated successfully.
Jan 29 12:04:54.674355 containerd[2010]: time="2025-01-29T12:04:54.674301004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:04:54.676074 containerd[2010]: time="2025-01-29T12:04:54.676025254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 29 12:04:54.677872 containerd[2010]: time="2025-01-29T12:04:54.677837487Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:04:54.679878 containerd[2010]: time="2025-01-29T12:04:54.679359423Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:04:54.680945 containerd[2010]: time="2025-01-29T12:04:54.680891673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:04:54.683203 containerd[2010]: time="2025-01-29T12:04:54.683164621Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:04:54.684444 containerd[2010]: time="2025-01-29T12:04:54.684390126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:04:54.689471 containerd[2010]: time="2025-01-29T12:04:54.689404835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:04:54.690378 containerd[2010]: time="2025-01-29T12:04:54.690342183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.31841ms"
Jan 29 12:04:54.692530 containerd[2010]: time="2025-01-29T12:04:54.692462272Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.215878ms"
Jan 29 12:04:54.695220 containerd[2010]: time="2025-01-29T12:04:54.695185753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 594.028669ms"
Jan 29 12:04:54.963715 kubelet[2961]: W0129 12:04:54.963614 2961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:54.964956 kubelet[2961]: E0129 12:04:54.963732 2961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:54.967529 containerd[2010]: time="2025-01-29T12:04:54.961681444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:04:54.967529 containerd[2010]: time="2025-01-29T12:04:54.961854607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:04:54.967529 containerd[2010]: time="2025-01-29T12:04:54.961869591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:04:54.967858 containerd[2010]: time="2025-01-29T12:04:54.967731055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:04:54.974827 containerd[2010]: time="2025-01-29T12:04:54.973478307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:04:54.974827 containerd[2010]: time="2025-01-29T12:04:54.973691665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:04:54.974827 containerd[2010]: time="2025-01-29T12:04:54.973830009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:04:54.974827 containerd[2010]: time="2025-01-29T12:04:54.974005323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:04:54.981537 containerd[2010]: time="2025-01-29T12:04:54.981242061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:04:54.981706 containerd[2010]: time="2025-01-29T12:04:54.981566892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:04:54.981706 containerd[2010]: time="2025-01-29T12:04:54.981612915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:04:54.981908 containerd[2010]: time="2025-01-29T12:04:54.981771192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:04:54.982586 kubelet[2961]: W0129 12:04:54.982520 2961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:54.982682 kubelet[2961]: E0129 12:04:54.982601 2961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:05:55.067881 kubelet[2961]: E0129 12:04:55.067823 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-14?timeout=10s\": dial tcp 172.31.19.14:6443: connect: connection refused" interval="1.6s"
Jan 29 12:04:55.170489 kubelet[2961]: I0129 12:04:55.170461 2961 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-14"
Jan 29 12:04:55.171425 kubelet[2961]: E0129 12:04:55.171380 2961 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.14:6443/api/v1/nodes\": dial tcp 172.31.19.14:6443: connect: connection refused" node="ip-172-31-19-14"
Jan 29 12:04:55.174909 kubelet[2961]: W0129 12:04:55.174645 2961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-14&limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:55.175102 kubelet[2961]: E0129 12:04:55.174918 2961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-14&limit=500&resourceVersion=0": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:55.218891 containerd[2010]: time="2025-01-29T12:04:55.218016858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-14,Uid:af24ae53861aa01b259c0e76e6e0aa99,Namespace:kube-system,Attempt:0,} returns sandbox id \"fac02f0258d10084b6daa928d6ee0fb4fe382dbba80c3313c7d9f3ec364cbdc6\""
Jan 29 12:04:55.236332 containerd[2010]: time="2025-01-29T12:04:55.236180111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-14,Uid:7d85ddbbe414b8251fae1c2d00224b4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"39cdbef9149fc44b12b1351a378ad6f4ffbcbf8769811d9c307108dd7fcd6a9e\""
Jan 29 12:04:55.236332 containerd[2010]: time="2025-01-29T12:04:55.236235380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-14,Uid:a9355bfed64ad5138c5fc0d8eb949189,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4317d3cb6d3eb29c11dbd3eb157b17d85adfb3474bbacaea332166f6308362e\""
Jan 29 12:04:55.236332 containerd[2010]: time="2025-01-29T12:04:55.236182884Z" level=info msg="CreateContainer within sandbox \"fac02f0258d10084b6daa928d6ee0fb4fe382dbba80c3313c7d9f3ec364cbdc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 12:04:55.242034 containerd[2010]: time="2025-01-29T12:04:55.241606464Z" level=info msg="CreateContainer within sandbox \"f4317d3cb6d3eb29c11dbd3eb157b17d85adfb3474bbacaea332166f6308362e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 12:04:55.243478 containerd[2010]: time="2025-01-29T12:04:55.243335683Z" level=info msg="CreateContainer within sandbox \"39cdbef9149fc44b12b1351a378ad6f4ffbcbf8769811d9c307108dd7fcd6a9e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 12:04:55.303909 containerd[2010]: time="2025-01-29T12:04:55.303857860Z" level=info msg="CreateContainer within sandbox \"fac02f0258d10084b6daa928d6ee0fb4fe382dbba80c3313c7d9f3ec364cbdc6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e750b8f8fb1dbf141e9021fb01330981eb41514e48b073e95fcfc0dbcfa51cf\""
Jan 29 12:04:55.305106 containerd[2010]: time="2025-01-29T12:04:55.305067868Z" level=info msg="StartContainer for \"9e750b8f8fb1dbf141e9021fb01330981eb41514e48b073e95fcfc0dbcfa51cf\""
Jan 29 12:04:55.314414 kubelet[2961]: E0129 12:04:55.314206 2961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.14:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-14.181f284d4ab8b038 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-14,UID:ip-172-31-19-14,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-14,},FirstTimestamp:2025-01-29 12:04:53.619511352 +0000 UTC m=+0.881543331,LastTimestamp:2025-01-29 12:04:53.619511352 +0000 UTC m=+0.881543331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-14,}"
Jan 29 12:04:55.317318 containerd[2010]: time="2025-01-29T12:04:55.316680941Z" level=info msg="CreateContainer within sandbox \"f4317d3cb6d3eb29c11dbd3eb157b17d85adfb3474bbacaea332166f6308362e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e245fc3ac90444fd32bdc737272228b8483fbc176ddfab4fb83e4f9bee4cfdb\""
Jan 29 12:04:55.319861 containerd[2010]: time="2025-01-29T12:04:55.317772023Z" level=info msg="StartContainer for \"0e245fc3ac90444fd32bdc737272228b8483fbc176ddfab4fb83e4f9bee4cfdb\""
Jan 29 12:04:55.320461 containerd[2010]: time="2025-01-29T12:04:55.320428777Z" level=info msg="CreateContainer within sandbox \"39cdbef9149fc44b12b1351a378ad6f4ffbcbf8769811d9c307108dd7fcd6a9e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42a223d4cfa34dc5d33a5cda1c1ea9bca21e681bf5f9dd0f69aafe2db8b2dbae\""
Jan 29 12:04:55.321579 containerd[2010]: time="2025-01-29T12:04:55.321512244Z" level=info msg="StartContainer for \"42a223d4cfa34dc5d33a5cda1c1ea9bca21e681bf5f9dd0f69aafe2db8b2dbae\""
Jan 29 12:04:55.544901 containerd[2010]: time="2025-01-29T12:04:55.544770291Z" level=info msg="StartContainer for \"9e750b8f8fb1dbf141e9021fb01330981eb41514e48b073e95fcfc0dbcfa51cf\" returns successfully"
Jan 29 12:04:55.620506 containerd[2010]: time="2025-01-29T12:04:55.620459277Z" level=info msg="StartContainer for \"42a223d4cfa34dc5d33a5cda1c1ea9bca21e681bf5f9dd0f69aafe2db8b2dbae\" returns successfully"
Jan 29 12:04:55.639247 containerd[2010]: time="2025-01-29T12:04:55.639120444Z" level=info msg="StartContainer for \"0e245fc3ac90444fd32bdc737272228b8483fbc176ddfab4fb83e4f9bee4cfdb\" returns successfully"
Jan 29 12:04:55.659013 kubelet[2961]: E0129 12:04:55.658979 2961 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.14:6443: connect: connection refused
Jan 29 12:04:56.780305 kubelet[2961]: I0129 12:04:56.778235 2961 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-14"
Jan 29 12:04:58.754814 kubelet[2961]: E0129 12:04:58.754755 2961 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-14\" not found" node="ip-172-31-19-14"
Jan 29 12:04:58.866864 kubelet[2961]: I0129 12:04:58.866656 2961 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-14"
Jan 29 12:04:59.232929 update_engine[1991]: I20250129 12:04:59.232847 1991 update_attempter.cc:509] Updating boot flags...
Jan 29 12:04:59.317864 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3246)
Jan 29 12:04:59.575820 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3245)
Jan 29 12:04:59.616502 kubelet[2961]: I0129 12:04:59.616394 2961 apiserver.go:52] "Watching apiserver"
Jan 29 12:04:59.663924 kubelet[2961]: I0129 12:04:59.663186 2961 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 12:04:59.995126 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3245)
Jan 29 12:05:01.728539 systemd[1]: Reloading requested from client PID 3500 ('systemctl') (unit session-7.scope)...
Jan 29 12:05:01.728560 systemd[1]: Reloading...
Jan 29 12:05:02.058527 zram_generator::config[3536]: No configuration found.
Jan 29 12:05:02.304488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:05:02.527997 systemd[1]: Reloading finished in 798 ms.
Jan 29 12:05:02.663520 kubelet[2961]: I0129 12:05:02.663154 2961 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:05:02.663539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:05:02.693809 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 12:05:02.694274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:05:02.713178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:05:03.527347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:05:03.550708 (kubelet)[3607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:05:03.660039 kubelet[3607]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:05:03.660039 kubelet[3607]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:05:03.660039 kubelet[3607]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:05:03.661044 kubelet[3607]: I0129 12:05:03.660557 3607 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:05:03.669722 kubelet[3607]: I0129 12:05:03.669384 3607 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:05:03.669722 kubelet[3607]: I0129 12:05:03.669415 3607 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:05:03.671040 kubelet[3607]: I0129 12:05:03.670291 3607 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:05:03.672669 kubelet[3607]: I0129 12:05:03.672207 3607 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:05:03.674903 kubelet[3607]: I0129 12:05:03.674044 3607 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:05:03.689275 kubelet[3607]: I0129 12:05:03.688786 3607 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:05:03.690382 kubelet[3607]: I0129 12:05:03.690333 3607 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:05:03.690619 kubelet[3607]: I0129 12:05:03.690383 3607 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-14","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:05:03.690769 kubelet[3607]: I0129 12:05:03.690641 3607 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:05:03.690769 kubelet[3607]: I0129 12:05:03.690658 3607 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:05:03.690769 kubelet[3607]: I0129 12:05:03.690709 3607 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:05:03.690972 kubelet[3607]: I0129 12:05:03.690827 3607 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:05:03.690972 kubelet[3607]: I0129 12:05:03.690842 3607 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:05:03.691372 kubelet[3607]: I0129 12:05:03.691253 3607 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:05:03.691372 kubelet[3607]: I0129 12:05:03.691284 3607 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:05:03.698475 kubelet[3607]: I0129 12:05:03.698443 3607 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:05:03.701984 kubelet[3607]: I0129 12:05:03.701652 3607 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:05:03.704609 kubelet[3607]: I0129 12:05:03.704574 3607 server.go:1264] "Started kubelet" Jan 29 12:05:03.717864 kubelet[3607]: I0129 12:05:03.711867 3607 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:05:03.717864 kubelet[3607]: I0129 12:05:03.713273 3607 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:05:03.717864 kubelet[3607]: I0129 12:05:03.713944 3607 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:05:03.717864 kubelet[3607]: I0129 12:05:03.714251 3607 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:05:03.717864 kubelet[3607]: I0129 12:05:03.716247 3607 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:05:03.733025 kubelet[3607]: I0129 12:05:03.731547 3607 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:05:03.733025 kubelet[3607]: I0129 12:05:03.731949 3607 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:05:03.733025 kubelet[3607]: I0129 12:05:03.732102 3607 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:05:03.735201 kubelet[3607]: I0129 12:05:03.734581 3607 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:05:03.735201 kubelet[3607]: I0129 12:05:03.734713 3607 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:05:03.744314 kubelet[3607]: I0129 12:05:03.744288 3607 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:05:03.749183 kubelet[3607]: E0129 12:05:03.749153 3607 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:05:03.758365 kubelet[3607]: I0129 12:05:03.758318 3607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:05:03.764282 kubelet[3607]: I0129 12:05:03.764243 3607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:05:03.767927 kubelet[3607]: I0129 12:05:03.767903 3607 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:05:03.769060 kubelet[3607]: I0129 12:05:03.768184 3607 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:05:03.769286 kubelet[3607]: E0129 12:05:03.769251 3607 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:05:03.836779 kubelet[3607]: I0129 12:05:03.836661 3607 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-14" Jan 29 12:05:03.852130 kubelet[3607]: I0129 12:05:03.852035 3607 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-14" Jan 29 12:05:03.852130 kubelet[3607]: I0129 12:05:03.852136 3607 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-14" Jan 29 12:05:03.872245 kubelet[3607]: E0129 12:05:03.870583 3607 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:05:03.885712 kubelet[3607]: I0129 12:05:03.885210 3607 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:05:03.885712 kubelet[3607]: I0129 12:05:03.885230 3607 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:05:03.885712 kubelet[3607]: I0129 12:05:03.885253 3607 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:05:03.885712 kubelet[3607]: I0129 12:05:03.885438 3607 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:05:03.885712 kubelet[3607]: I0129 12:05:03.885451 3607 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:05:03.885712 kubelet[3607]: I0129 12:05:03.885475 
3607 policy_none.go:49] "None policy: Start" Jan 29 12:05:03.887589 kubelet[3607]: I0129 12:05:03.886769 3607 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:05:03.887815 kubelet[3607]: I0129 12:05:03.887689 3607 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:05:03.887981 kubelet[3607]: I0129 12:05:03.887960 3607 state_mem.go:75] "Updated machine memory state" Jan 29 12:05:03.889700 kubelet[3607]: I0129 12:05:03.889607 3607 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:05:03.893703 kubelet[3607]: I0129 12:05:03.893254 3607 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:05:03.893885 kubelet[3607]: I0129 12:05:03.893734 3607 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:05:04.072822 kubelet[3607]: I0129 12:05:04.072747 3607 topology_manager.go:215] "Topology Admit Handler" podUID="af24ae53861aa01b259c0e76e6e0aa99" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-14" Jan 29 12:05:04.072994 kubelet[3607]: I0129 12:05:04.072902 3607 topology_manager.go:215] "Topology Admit Handler" podUID="7d85ddbbe414b8251fae1c2d00224b4d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-14" Jan 29 12:05:04.072994 kubelet[3607]: I0129 12:05:04.072981 3607 topology_manager.go:215] "Topology Admit Handler" podUID="a9355bfed64ad5138c5fc0d8eb949189" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-14" Jan 29 12:05:04.086723 kubelet[3607]: E0129 12:05:04.084848 3607 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-14\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-14" Jan 29 12:05:04.086723 kubelet[3607]: E0129 12:05:04.086046 3607 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-19-14\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-19-14" Jan 29 12:05:04.135406 kubelet[3607]: I0129 12:05:04.135134 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af24ae53861aa01b259c0e76e6e0aa99-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-14\" (UID: \"af24ae53861aa01b259c0e76e6e0aa99\") " pod="kube-system/kube-apiserver-ip-172-31-19-14" Jan 29 12:05:04.135406 kubelet[3607]: I0129 12:05:04.135181 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14" Jan 29 12:05:04.135406 kubelet[3607]: I0129 12:05:04.135208 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14" Jan 29 12:05:04.135406 kubelet[3607]: I0129 12:05:04.135234 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14" Jan 29 12:05:04.135406 kubelet[3607]: I0129 12:05:04.135279 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14" Jan 29 12:05:04.137285 kubelet[3607]: I0129 12:05:04.135304 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af24ae53861aa01b259c0e76e6e0aa99-ca-certs\") pod \"kube-apiserver-ip-172-31-19-14\" (UID: \"af24ae53861aa01b259c0e76e6e0aa99\") " pod="kube-system/kube-apiserver-ip-172-31-19-14" Jan 29 12:05:04.137285 kubelet[3607]: I0129 12:05:04.135326 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af24ae53861aa01b259c0e76e6e0aa99-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-14\" (UID: \"af24ae53861aa01b259c0e76e6e0aa99\") " pod="kube-system/kube-apiserver-ip-172-31-19-14" Jan 29 12:05:04.137285 kubelet[3607]: I0129 12:05:04.135350 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d85ddbbe414b8251fae1c2d00224b4d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-14\" (UID: \"7d85ddbbe414b8251fae1c2d00224b4d\") " pod="kube-system/kube-controller-manager-ip-172-31-19-14" Jan 29 12:05:04.137285 kubelet[3607]: I0129 12:05:04.135370 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9355bfed64ad5138c5fc0d8eb949189-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-14\" (UID: \"a9355bfed64ad5138c5fc0d8eb949189\") " pod="kube-system/kube-scheduler-ip-172-31-19-14" Jan 29 12:05:04.712365 kubelet[3607]: I0129 12:05:04.712184 3607 apiserver.go:52] "Watching apiserver" Jan 29 12:05:04.732949 kubelet[3607]: I0129 12:05:04.732904 3607 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:05:04.795507 kubelet[3607]: E0129 12:05:04.795473 3607 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-14\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-14" Jan 29 12:05:04.894258 kubelet[3607]: I0129 12:05:04.894172 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-14" podStartSLOduration=0.894149999 podStartE2EDuration="894.149999ms" podCreationTimestamp="2025-01-29 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:04.843720536 +0000 UTC m=+1.276358433" watchObservedRunningTime="2025-01-29 12:05:04.894149999 +0000 UTC m=+1.326787911" Jan 29 12:05:04.930962 kubelet[3607]: I0129 12:05:04.930888 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-14" podStartSLOduration=5.93078969 
podStartE2EDuration="5.93078969s" podCreationTimestamp="2025-01-29 12:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:04.896110217 +0000 UTC m=+1.328748111" watchObservedRunningTime="2025-01-29 12:05:04.93078969 +0000 UTC m=+1.363427589" Jan 29 12:05:08.497912 sudo[2363]: pam_unix(sudo:session): session closed for user root Jan 29 12:05:08.533204 sshd[2359]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:08.542811 systemd[1]: sshd@6-172.31.19.14:22-139.178.68.195:34712.service: Deactivated successfully. Jan 29 12:05:08.549716 kubelet[3607]: I0129 12:05:08.548308 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-14" podStartSLOduration=9.548251918 podStartE2EDuration="9.548251918s" podCreationTimestamp="2025-01-29 12:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:04.94524067 +0000 UTC m=+1.377878563" watchObservedRunningTime="2025-01-29 12:05:08.548251918 +0000 UTC m=+4.980889811" Jan 29 12:05:08.555788 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:05:08.555984 systemd-logind[1987]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:05:08.560315 systemd-logind[1987]: Removed session 7. Jan 29 12:05:15.724986 kubelet[3607]: I0129 12:05:15.724869 3607 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:05:15.729656 containerd[2010]: time="2025-01-29T12:05:15.727228164Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
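The podStartSLOduration figures that pod_startup_latency_tracker.go reports above are plain timestamp arithmetic over fields already printed in each entry: watch-observed running time minus the pod's creation time, minus any time spent pulling images (zero for these pods, whose pull timestamps are the Go zero value 0001-01-01). A minimal Go sketch of that arithmetic using the kube-controller-manager entry's values follows; the exact subtraction is an assumption about how the tracker derives the figure, not kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps printed in the log (time.Time.String() format).
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-01-29 12:04:59 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-01-29 12:05:04.93078969 +0000 UTC")

        // firstStartedPulling and lastFinishedPulling are both zero values above,
        // so no image-pull time is subtracted for this pod.
        var pulling time.Duration

        fmt.Println(observed.Sub(created) - pulling) // 5.93078969s, the podStartSLOduration above
    }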
Jan 29 12:05:15.732848 kubelet[3607]: I0129 12:05:15.729297 3607 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:05:16.639074 kubelet[3607]: I0129 12:05:16.639022 3607 topology_manager.go:215] "Topology Admit Handler" podUID="fc069b9e-6068-420f-9e83-40f43033b837" podNamespace="kube-system" podName="kube-proxy-4lqkp" Jan 29 12:05:16.740824 kubelet[3607]: I0129 12:05:16.739934 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc069b9e-6068-420f-9e83-40f43033b837-lib-modules\") pod \"kube-proxy-4lqkp\" (UID: \"fc069b9e-6068-420f-9e83-40f43033b837\") " pod="kube-system/kube-proxy-4lqkp" Jan 29 12:05:16.740824 kubelet[3607]: I0129 12:05:16.739987 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvmk\" (UniqueName: \"kubernetes.io/projected/fc069b9e-6068-420f-9e83-40f43033b837-kube-api-access-8pvmk\") pod \"kube-proxy-4lqkp\" (UID: \"fc069b9e-6068-420f-9e83-40f43033b837\") " pod="kube-system/kube-proxy-4lqkp" Jan 29 12:05:16.740824 kubelet[3607]: I0129 12:05:16.740017 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc069b9e-6068-420f-9e83-40f43033b837-kube-proxy\") pod \"kube-proxy-4lqkp\" (UID: \"fc069b9e-6068-420f-9e83-40f43033b837\") " pod="kube-system/kube-proxy-4lqkp" Jan 29 12:05:16.740824 kubelet[3607]: I0129 12:05:16.740039 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc069b9e-6068-420f-9e83-40f43033b837-xtables-lock\") pod \"kube-proxy-4lqkp\" (UID: \"fc069b9e-6068-420f-9e83-40f43033b837\") " pod="kube-system/kube-proxy-4lqkp" Jan 29 12:05:16.868526 kubelet[3607]: I0129 12:05:16.868275 3607 topology_manager.go:215] "Topology Admit Handler" podUID="c955e615-51eb-4ccd-b878-e5a5d1e4d992" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-k5lbg" Jan 29 12:05:16.942220 kubelet[3607]: I0129 12:05:16.942083 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c955e615-51eb-4ccd-b878-e5a5d1e4d992-var-lib-calico\") pod \"tigera-operator-7bc55997bb-k5lbg\" (UID: \"c955e615-51eb-4ccd-b878-e5a5d1e4d992\") " pod="tigera-operator/tigera-operator-7bc55997bb-k5lbg" Jan 29 12:05:16.943083 kubelet[3607]: I0129 12:05:16.942885 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2chl\" (UniqueName: \"kubernetes.io/projected/c955e615-51eb-4ccd-b878-e5a5d1e4d992-kube-api-access-f2chl\") pod \"tigera-operator-7bc55997bb-k5lbg\" (UID: \"c955e615-51eb-4ccd-b878-e5a5d1e4d992\") " pod="tigera-operator/tigera-operator-7bc55997bb-k5lbg" Jan 29 12:05:16.971702 containerd[2010]: time="2025-01-29T12:05:16.971632545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4lqkp,Uid:fc069b9e-6068-420f-9e83-40f43033b837,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:17.027065 containerd[2010]: time="2025-01-29T12:05:17.026780132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:17.027065 containerd[2010]: time="2025-01-29T12:05:17.027013695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:17.027263 containerd[2010]: time="2025-01-29T12:05:17.027052783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:17.027785 containerd[2010]: time="2025-01-29T12:05:17.027413596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:17.108302 containerd[2010]: time="2025-01-29T12:05:17.108268564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4lqkp,Uid:fc069b9e-6068-420f-9e83-40f43033b837,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e4418287eff5c9a99bb60b5bff32de96e21f99877eba7a1896eebd2de6e4fea\"" Jan 29 12:05:17.112667 containerd[2010]: time="2025-01-29T12:05:17.112423852Z" level=info msg="CreateContainer within sandbox \"9e4418287eff5c9a99bb60b5bff32de96e21f99877eba7a1896eebd2de6e4fea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:05:17.138310 containerd[2010]: time="2025-01-29T12:05:17.138176619Z" level=info msg="CreateContainer within sandbox \"9e4418287eff5c9a99bb60b5bff32de96e21f99877eba7a1896eebd2de6e4fea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b9306ded97d8f7ae736e4c3ab7c106192b7d89cbdc945f490602ea5ad77d39d1\"" Jan 29 12:05:17.139298 containerd[2010]: time="2025-01-29T12:05:17.139161588Z" level=info msg="StartContainer for \"b9306ded97d8f7ae736e4c3ab7c106192b7d89cbdc945f490602ea5ad77d39d1\"" Jan 29 12:05:17.196713 containerd[2010]: time="2025-01-29T12:05:17.196170813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-k5lbg,Uid:c955e615-51eb-4ccd-b878-e5a5d1e4d992,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:05:17.246930 containerd[2010]: time="2025-01-29T12:05:17.246579513Z" level=info msg="StartContainer for \"b9306ded97d8f7ae736e4c3ab7c106192b7d89cbdc945f490602ea5ad77d39d1\" returns successfully" Jan 29 12:05:17.290699 containerd[2010]: time="2025-01-29T12:05:17.290571673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:17.290699 containerd[2010]: time="2025-01-29T12:05:17.290646957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:17.290699 containerd[2010]: time="2025-01-29T12:05:17.290664173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:17.291775 containerd[2010]: time="2025-01-29T12:05:17.290791745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:17.371097 containerd[2010]: time="2025-01-29T12:05:17.371042002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-k5lbg,Uid:c955e615-51eb-4ccd-b878-e5a5d1e4d992,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f2c7d78a19c1492aeea4e9601d3e4bdd83a8e4595dc21ef44c4b4a39e3558ed7\"" Jan 29 12:05:17.375398 containerd[2010]: time="2025-01-29T12:05:17.375233073Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 12:05:17.883820 kubelet[3607]: I0129 12:05:17.883080 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4lqkp" podStartSLOduration=1.883059114 podStartE2EDuration="1.883059114s" podCreationTimestamp="2025-01-29 12:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:17.882829917 +0000 UTC m=+14.315467813" watchObservedRunningTime="2025-01-29 12:05:17.883059114 +0000 UTC m=+14.315697007" Jan 29 12:05:18.939472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601302998.mount: Deactivated successfully. Jan 29 12:05:21.369477 containerd[2010]: time="2025-01-29T12:05:21.369430226Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:21.371475 containerd[2010]: time="2025-01-29T12:05:21.371426855Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 12:05:21.373115 containerd[2010]: time="2025-01-29T12:05:21.373052046Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:21.378937 containerd[2010]: time="2025-01-29T12:05:21.377970325Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:21.382150 containerd[2010]: time="2025-01-29T12:05:21.381642988Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.006285986s" Jan 29 12:05:21.382150 containerd[2010]: time="2025-01-29T12:05:21.382137412Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 12:05:21.417342 containerd[2010]: time="2025-01-29T12:05:21.416855141Z" level=info msg="CreateContainer within sandbox \"f2c7d78a19c1492aeea4e9601d3e4bdd83a8e4595dc21ef44c4b4a39e3558ed7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 12:05:21.465346 containerd[2010]: time="2025-01-29T12:05:21.465294549Z" level=info msg="CreateContainer within sandbox \"f2c7d78a19c1492aeea4e9601d3e4bdd83a8e4595dc21ef44c4b4a39e3558ed7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a8074dc0b87dbe9f89583a060bade3985b8691c721125e03469be7539b0b6fe6\""
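The pull telemetry above is enough to recover the effective transfer rate for the tigera-operator image: containerd reports bytes read=21762497 and a total pull time of 4.006285986s. A small Go sketch of that division; treating bytes read as the whole transfer is an assumption, since any layers already cached locally would not be re-read:

    package main

    import "fmt"

    func main() {
        // Figures copied from the two containerd entries above.
        bytesRead := 21762497.0 // "stop pulling image ... bytes read=21762497"
        seconds := 4.006285986  // "Pulled image ... in 4.006285986s"

        fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1024*1024)) // prints 5.18 MiB/s
    }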
\"a8074dc0b87dbe9f89583a060bade3985b8691c721125e03469be7539b0b6fe6\"" Jan 29 12:05:21.552075 systemd[1]: run-containerd-runc-k8s.io-a8074dc0b87dbe9f89583a060bade3985b8691c721125e03469be7539b0b6fe6-runc.6VReQM.mount: Deactivated successfully. Jan 29 12:05:21.653563 containerd[2010]: time="2025-01-29T12:05:21.652641968Z" level=info msg="StartContainer for \"a8074dc0b87dbe9f89583a060bade3985b8691c721125e03469be7539b0b6fe6\" returns successfully" Jan 29 12:05:23.818131 kubelet[3607]: I0129 12:05:23.814346 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-k5lbg" podStartSLOduration=3.795918758 podStartE2EDuration="7.814325308s" podCreationTimestamp="2025-01-29 12:05:16 +0000 UTC" firstStartedPulling="2025-01-29 12:05:17.372677533 +0000 UTC m=+13.805315406" lastFinishedPulling="2025-01-29 12:05:21.39108407 +0000 UTC m=+17.823721956" observedRunningTime="2025-01-29 12:05:21.891588154 +0000 UTC m=+18.324226051" watchObservedRunningTime="2025-01-29 12:05:23.814325308 +0000 UTC m=+20.246963193" Jan 29 12:05:25.719189 kubelet[3607]: I0129 12:05:25.718634 3607 topology_manager.go:215] "Topology Admit Handler" podUID="dfa413c7-004d-4a9c-acf7-8a54fa40e796" podNamespace="calico-system" podName="calico-typha-6569989c64-597x9" Jan 29 12:05:25.849937 kubelet[3607]: I0129 12:05:25.849895 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dfa413c7-004d-4a9c-acf7-8a54fa40e796-typha-certs\") pod \"calico-typha-6569989c64-597x9\" (UID: \"dfa413c7-004d-4a9c-acf7-8a54fa40e796\") " pod="calico-system/calico-typha-6569989c64-597x9" Jan 29 12:05:25.850127 kubelet[3607]: I0129 12:05:25.849946 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz9vm\" (UniqueName: \"kubernetes.io/projected/dfa413c7-004d-4a9c-acf7-8a54fa40e796-kube-api-access-qz9vm\") pod \"calico-typha-6569989c64-597x9\" (UID: \"dfa413c7-004d-4a9c-acf7-8a54fa40e796\") " pod="calico-system/calico-typha-6569989c64-597x9" Jan 29 12:05:25.850127 kubelet[3607]: I0129 12:05:25.849984 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfa413c7-004d-4a9c-acf7-8a54fa40e796-tigera-ca-bundle\") pod \"calico-typha-6569989c64-597x9\" (UID: \"dfa413c7-004d-4a9c-acf7-8a54fa40e796\") " pod="calico-system/calico-typha-6569989c64-597x9" Jan 29 12:05:25.891619 kubelet[3607]: I0129 12:05:25.887019 3607 topology_manager.go:215] "Topology Admit Handler" podUID="d596934c-ca0e-497b-8307-03e9f3bb089b" podNamespace="calico-system" podName="calico-node-n7g8f" Jan 29 12:05:25.951347 kubelet[3607]: I0129 12:05:25.951272 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d596934c-ca0e-497b-8307-03e9f3bb089b-policysync\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.954900 kubelet[3607]: I0129 12:05:25.952877 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d596934c-ca0e-497b-8307-03e9f3bb089b-cni-net-dir\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.955156 
kubelet[3607]: I0129 12:05:25.955130 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d596934c-ca0e-497b-8307-03e9f3bb089b-var-run-calico\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.955255 kubelet[3607]: I0129 12:05:25.955242 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d596934c-ca0e-497b-8307-03e9f3bb089b-cni-bin-dir\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.955343 kubelet[3607]: I0129 12:05:25.955331 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d596934c-ca0e-497b-8307-03e9f3bb089b-flexvol-driver-host\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.955427 kubelet[3607]: I0129 12:05:25.955416 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d596934c-ca0e-497b-8307-03e9f3bb089b-tigera-ca-bundle\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.955509 kubelet[3607]: I0129 12:05:25.955497 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvt7d\" (UniqueName: \"kubernetes.io/projected/d596934c-ca0e-497b-8307-03e9f3bb089b-kube-api-access-rvt7d\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.955934 kubelet[3607]: I0129 12:05:25.955912 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d596934c-ca0e-497b-8307-03e9f3bb089b-node-certs\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.956076 kubelet[3607]: I0129 12:05:25.956059 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d596934c-ca0e-497b-8307-03e9f3bb089b-var-lib-calico\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.956440 kubelet[3607]: I0129 12:05:25.956153 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d596934c-ca0e-497b-8307-03e9f3bb089b-cni-log-dir\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.956440 kubelet[3607]: I0129 12:05:25.956207 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d596934c-ca0e-497b-8307-03e9f3bb089b-xtables-lock\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:25.956440 kubelet[3607]: I0129 12:05:25.956272 3607 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d596934c-ca0e-497b-8307-03e9f3bb089b-lib-modules\") pod \"calico-node-n7g8f\" (UID: \"d596934c-ca0e-497b-8307-03e9f3bb089b\") " pod="calico-system/calico-node-n7g8f" Jan 29 12:05:26.074789 kubelet[3607]: E0129 12:05:26.073904 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.074789 kubelet[3607]: W0129 12:05:26.073941 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.074789 kubelet[3607]: E0129 12:05:26.073974 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.077352 kubelet[3607]: E0129 12:05:26.076951 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.077352 kubelet[3607]: W0129 12:05:26.076978 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.077352 kubelet[3607]: E0129 12:05:26.077007 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.080220 kubelet[3607]: E0129 12:05:26.078896 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.080220 kubelet[3607]: W0129 12:05:26.078921 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.080220 kubelet[3607]: E0129 12:05:26.079037 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.082628 kubelet[3607]: E0129 12:05:26.082602 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.084004 kubelet[3607]: W0129 12:05:26.083977 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.085823 kubelet[3607]: E0129 12:05:26.084113 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 29 12:05:26.082628 kubelet[3607]: E0129 12:05:26.082602 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.084004 kubelet[3607]: W0129 12:05:26.083977 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.085823 kubelet[3607]: E0129 12:05:26.084113 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.093925 kubelet[3607]: E0129 12:05:26.093887 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.094339 kubelet[3607]: W0129 12:05:26.094304 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.094879 kubelet[3607]: E0129 12:05:26.094851 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.096910 kubelet[3607]: E0129 12:05:26.096888 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.097036 kubelet[3607]: W0129 12:05:26.097022 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.099121 kubelet[3607]: E0129 12:05:26.099092 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.099565 kubelet[3607]: E0129 12:05:26.099550 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.099667 kubelet[3607]: W0129 12:05:26.099655 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.100864 kubelet[3607]: E0129 12:05:26.100843 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.108899 kubelet[3607]: E0129 12:05:26.108860 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.109190 kubelet[3607]: W0129 12:05:26.109050 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.109190 kubelet[3607]: E0129 12:05:26.109081 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.114195 kubelet[3607]: E0129 12:05:26.114119 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.114195 kubelet[3607]: W0129 12:05:26.114146 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.114537 kubelet[3607]: E0129 12:05:26.114173 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.122684 kubelet[3607]: E0129 12:05:26.122401 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.122684 kubelet[3607]: W0129 12:05:26.122427 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.122684 kubelet[3607]: E0129 12:05:26.122455 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.125520 kubelet[3607]: E0129 12:05:26.125496 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.125640 kubelet[3607]: W0129 12:05:26.125624 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.128107 kubelet[3607]: E0129 12:05:26.128079 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.128267 kubelet[3607]: E0129 12:05:26.128120 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.128339 kubelet[3607]: W0129 12:05:26.128327 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.128790 kubelet[3607]: E0129 12:05:26.128628 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.128790 kubelet[3607]: W0129 12:05:26.128639 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.128790 kubelet[3607]: E0129 12:05:26.128654 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.128790 kubelet[3607]: E0129 12:05:26.128678 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.130042 kubelet[3607]: E0129 12:05:26.130027 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.130556 kubelet[3607]: W0129 12:05:26.130135 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.130556 kubelet[3607]: E0129 12:05:26.130234 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.148613 kubelet[3607]: E0129 12:05:26.147887 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.148613 kubelet[3607]: W0129 12:05:26.147910 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.148613 kubelet[3607]: E0129 12:05:26.147933 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.208792 containerd[2010]: time="2025-01-29T12:05:26.208747813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7g8f,Uid:d596934c-ca0e-497b-8307-03e9f3bb089b,Namespace:calico-system,Attempt:0,}" Jan 29 12:05:26.211380 kubelet[3607]: I0129 12:05:26.211338 3607 topology_manager.go:215] "Topology Admit Handler" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" podNamespace="calico-system" podName="csi-node-driver-ht95p" Jan 29 12:05:26.212598 kubelet[3607]: E0129 12:05:26.211690 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:26.312343 kubelet[3607]: E0129 12:05:26.312162 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.312343 kubelet[3607]: W0129 12:05:26.312189 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.312343 kubelet[3607]: E0129 12:05:26.312217 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.332079 kubelet[3607]: E0129 12:05:26.312653 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.332079 kubelet[3607]: W0129 12:05:26.312667 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.332079 kubelet[3607]: E0129 12:05:26.312700 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.332079 kubelet[3607]: E0129 12:05:26.312974 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.332079 kubelet[3607]: W0129 12:05:26.312986 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.332079 kubelet[3607]: E0129 12:05:26.313015 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.332079 kubelet[3607]: E0129 12:05:26.313370 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.332079 kubelet[3607]: W0129 12:05:26.313385 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.332079 kubelet[3607]: E0129 12:05:26.313402 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.332079 kubelet[3607]: E0129 12:05:26.315006 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.332490 kubelet[3607]: W0129 12:05:26.315024 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.332490 kubelet[3607]: E0129 12:05:26.315041 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.332490 kubelet[3607]: E0129 12:05:26.315337 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.332490 kubelet[3607]: W0129 12:05:26.315346 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.332490 kubelet[3607]: E0129 12:05:26.315359 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.332490 kubelet[3607]: E0129 12:05:26.315586 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.332490 kubelet[3607]: W0129 12:05:26.315594 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.332490 kubelet[3607]: E0129 12:05:26.315604 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.332490 kubelet[3607]: E0129 12:05:26.315973 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.332490 kubelet[3607]: W0129 12:05:26.315985 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.336414 kubelet[3607]: E0129 12:05:26.316998 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.336414 kubelet[3607]: E0129 12:05:26.317753 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.336414 kubelet[3607]: W0129 12:05:26.317778 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.336414 kubelet[3607]: E0129 12:05:26.317793 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.336414 kubelet[3607]: E0129 12:05:26.318097 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.336414 kubelet[3607]: W0129 12:05:26.318108 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.336414 kubelet[3607]: E0129 12:05:26.318122 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.336414 kubelet[3607]: E0129 12:05:26.318371 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.336414 kubelet[3607]: W0129 12:05:26.318383 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.336414 kubelet[3607]: E0129 12:05:26.318396 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.337400 kubelet[3607]: E0129 12:05:26.320883 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.337400 kubelet[3607]: W0129 12:05:26.320899 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.337400 kubelet[3607]: E0129 12:05:26.320952 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.337400 kubelet[3607]: E0129 12:05:26.322003 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.337400 kubelet[3607]: W0129 12:05:26.322021 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.337400 kubelet[3607]: E0129 12:05:26.322071 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.337400 kubelet[3607]: E0129 12:05:26.322467 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.337400 kubelet[3607]: W0129 12:05:26.322513 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.337400 kubelet[3607]: E0129 12:05:26.322530 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.337400 kubelet[3607]: E0129 12:05:26.323006 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.339826 kubelet[3607]: W0129 12:05:26.323021 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.339826 kubelet[3607]: E0129 12:05:26.323069 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.339826 kubelet[3607]: E0129 12:05:26.323705 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.339826 kubelet[3607]: W0129 12:05:26.323718 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.339826 kubelet[3607]: E0129 12:05:26.323732 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.339826 kubelet[3607]: E0129 12:05:26.324189 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.339826 kubelet[3607]: W0129 12:05:26.324201 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.339826 kubelet[3607]: E0129 12:05:26.324216 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.339826 kubelet[3607]: E0129 12:05:26.324471 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.339826 kubelet[3607]: W0129 12:05:26.324483 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.341314 kubelet[3607]: E0129 12:05:26.324497 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.341314 kubelet[3607]: E0129 12:05:26.324756 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.341314 kubelet[3607]: W0129 12:05:26.324766 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.341314 kubelet[3607]: E0129 12:05:26.324778 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.341314 kubelet[3607]: E0129 12:05:26.325084 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.341314 kubelet[3607]: W0129 12:05:26.325095 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.341314 kubelet[3607]: E0129 12:05:26.325109 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.348162 containerd[2010]: time="2025-01-29T12:05:26.348060032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6569989c64-597x9,Uid:dfa413c7-004d-4a9c-acf7-8a54fa40e796,Namespace:calico-system,Attempt:0,}" Jan 29 12:05:26.370988 kubelet[3607]: E0129 12:05:26.368205 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.370988 kubelet[3607]: W0129 12:05:26.368233 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.370988 kubelet[3607]: E0129 12:05:26.368261 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.370988 kubelet[3607]: I0129 12:05:26.368303 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e3ae492-9704-4aa3-aacf-00b3ecf4f562-socket-dir\") pod \"csi-node-driver-ht95p\" (UID: \"6e3ae492-9704-4aa3-aacf-00b3ecf4f562\") " pod="calico-system/csi-node-driver-ht95p" Jan 29 12:05:26.370988 kubelet[3607]: E0129 12:05:26.369423 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.370988 kubelet[3607]: W0129 12:05:26.369444 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.370988 kubelet[3607]: E0129 12:05:26.369470 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.370988 kubelet[3607]: I0129 12:05:26.369501 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dzk\" (UniqueName: \"kubernetes.io/projected/6e3ae492-9704-4aa3-aacf-00b3ecf4f562-kube-api-access-p9dzk\") pod \"csi-node-driver-ht95p\" (UID: \"6e3ae492-9704-4aa3-aacf-00b3ecf4f562\") " pod="calico-system/csi-node-driver-ht95p" Jan 29 12:05:26.372289 kubelet[3607]: E0129 12:05:26.372177 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.372289 kubelet[3607]: W0129 12:05:26.372204 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.372289 kubelet[3607]: E0129 12:05:26.372248 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.372699 kubelet[3607]: I0129 12:05:26.372586 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6e3ae492-9704-4aa3-aacf-00b3ecf4f562-varrun\") pod \"csi-node-driver-ht95p\" (UID: \"6e3ae492-9704-4aa3-aacf-00b3ecf4f562\") " pod="calico-system/csi-node-driver-ht95p" Jan 29 12:05:26.373370 kubelet[3607]: E0129 12:05:26.373345 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.373370 kubelet[3607]: W0129 12:05:26.373366 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.373574 kubelet[3607]: E0129 12:05:26.373545 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.377001 kubelet[3607]: E0129 12:05:26.376971 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.377001 kubelet[3607]: W0129 12:05:26.376994 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.380422 kubelet[3607]: E0129 12:05:26.380376 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.381313 kubelet[3607]: E0129 12:05:26.381263 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.381313 kubelet[3607]: W0129 12:05:26.381287 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.385113 kubelet[3607]: E0129 12:05:26.384917 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.385113 kubelet[3607]: E0129 12:05:26.385002 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.385113 kubelet[3607]: W0129 12:05:26.385016 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.389392 kubelet[3607]: E0129 12:05:26.388574 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.389392 kubelet[3607]: W0129 12:05:26.388599 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.391216 kubelet[3607]: E0129 12:05:26.390958 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.391216 kubelet[3607]: I0129 12:05:26.391009 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e3ae492-9704-4aa3-aacf-00b3ecf4f562-registration-dir\") pod \"csi-node-driver-ht95p\" (UID: \"6e3ae492-9704-4aa3-aacf-00b3ecf4f562\") " pod="calico-system/csi-node-driver-ht95p" Jan 29 12:05:26.391216 kubelet[3607]: E0129 12:05:26.391057 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.393572 kubelet[3607]: E0129 12:05:26.393307 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.393572 kubelet[3607]: W0129 12:05:26.393332 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.393572 kubelet[3607]: E0129 12:05:26.393359 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.395922 kubelet[3607]: E0129 12:05:26.395136 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.395922 kubelet[3607]: W0129 12:05:26.395152 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.395922 kubelet[3607]: E0129 12:05:26.395177 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.395922 kubelet[3607]: I0129 12:05:26.395208 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e3ae492-9704-4aa3-aacf-00b3ecf4f562-kubelet-dir\") pod \"csi-node-driver-ht95p\" (UID: \"6e3ae492-9704-4aa3-aacf-00b3ecf4f562\") " pod="calico-system/csi-node-driver-ht95p" Jan 29 12:05:26.397394 kubelet[3607]: E0129 12:05:26.397364 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.397394 kubelet[3607]: W0129 12:05:26.397384 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.397496 kubelet[3607]: E0129 12:05:26.397421 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.398982 kubelet[3607]: E0129 12:05:26.397952 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.398982 kubelet[3607]: W0129 12:05:26.397965 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.398982 kubelet[3607]: E0129 12:05:26.397984 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.401599 kubelet[3607]: E0129 12:05:26.400786 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.401599 kubelet[3607]: W0129 12:05:26.400831 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.401599 kubelet[3607]: E0129 12:05:26.400857 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.401599 kubelet[3607]: E0129 12:05:26.401396 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.401599 kubelet[3607]: W0129 12:05:26.401422 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.401599 kubelet[3607]: E0129 12:05:26.401438 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.403535 kubelet[3607]: E0129 12:05:26.402192 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.403535 kubelet[3607]: W0129 12:05:26.402208 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.403535 kubelet[3607]: E0129 12:05:26.402223 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.475050 containerd[2010]: time="2025-01-29T12:05:26.473717373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:26.475050 containerd[2010]: time="2025-01-29T12:05:26.474694120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:26.475050 containerd[2010]: time="2025-01-29T12:05:26.474765696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:26.475050 containerd[2010]: time="2025-01-29T12:05:26.474923913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:26.503552 kubelet[3607]: E0129 12:05:26.503030 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.503552 kubelet[3607]: W0129 12:05:26.503339 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.503552 kubelet[3607]: E0129 12:05:26.503370 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.507134 kubelet[3607]: E0129 12:05:26.506485 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.507134 kubelet[3607]: W0129 12:05:26.506503 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.507134 kubelet[3607]: E0129 12:05:26.506640 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.510289 kubelet[3607]: E0129 12:05:26.510270 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.510587 kubelet[3607]: W0129 12:05:26.510559 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.510821 kubelet[3607]: E0129 12:05:26.510778 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.512281 kubelet[3607]: E0129 12:05:26.512256 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.512431 kubelet[3607]: W0129 12:05:26.512413 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.513547 kubelet[3607]: E0129 12:05:26.513517 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.514064 kubelet[3607]: E0129 12:05:26.513991 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.514214 kubelet[3607]: W0129 12:05:26.514199 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.515953 kubelet[3607]: E0129 12:05:26.515256 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.515953 kubelet[3607]: E0129 12:05:26.515753 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.515953 kubelet[3607]: W0129 12:05:26.515765 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.515953 kubelet[3607]: E0129 12:05:26.515870 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.516895 kubelet[3607]: E0129 12:05:26.516766 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.516895 kubelet[3607]: W0129 12:05:26.516781 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.518488 kubelet[3607]: E0129 12:05:26.518041 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.518488 kubelet[3607]: E0129 12:05:26.518159 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.518488 kubelet[3607]: W0129 12:05:26.518187 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.518488 kubelet[3607]: E0129 12:05:26.518268 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.519140 containerd[2010]: time="2025-01-29T12:05:26.517523827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:26.519140 containerd[2010]: time="2025-01-29T12:05:26.517616894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:26.519140 containerd[2010]: time="2025-01-29T12:05:26.517641322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:26.519140 containerd[2010]: time="2025-01-29T12:05:26.519018241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:26.519461 kubelet[3607]: E0129 12:05:26.518873 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.519461 kubelet[3607]: W0129 12:05:26.518884 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.522836 kubelet[3607]: E0129 12:05:26.519853 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.522836 kubelet[3607]: E0129 12:05:26.520189 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.522836 kubelet[3607]: W0129 12:05:26.520200 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.522836 kubelet[3607]: E0129 12:05:26.522302 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.523369 kubelet[3607]: E0129 12:05:26.523243 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.523369 kubelet[3607]: W0129 12:05:26.523260 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.523723 kubelet[3607]: E0129 12:05:26.523629 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.523911 kubelet[3607]: E0129 12:05:26.523890 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.524010 kubelet[3607]: W0129 12:05:26.523992 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.524180 kubelet[3607]: E0129 12:05:26.524167 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.524552 kubelet[3607]: E0129 12:05:26.524540 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.524653 kubelet[3607]: W0129 12:05:26.524640 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.525997 kubelet[3607]: E0129 12:05:26.525477 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.526126 kubelet[3607]: E0129 12:05:26.526115 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.526273 kubelet[3607]: W0129 12:05:26.526193 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.526371 kubelet[3607]: E0129 12:05:26.526357 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.527292 kubelet[3607]: E0129 12:05:26.527002 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.527292 kubelet[3607]: W0129 12:05:26.527014 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.527292 kubelet[3607]: E0129 12:05:26.527240 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.527894 kubelet[3607]: E0129 12:05:26.527837 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.527894 kubelet[3607]: W0129 12:05:26.527856 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.528232 kubelet[3607]: E0129 12:05:26.527955 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.528232 kubelet[3607]: E0129 12:05:26.528100 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.528232 kubelet[3607]: W0129 12:05:26.528109 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.528232 kubelet[3607]: E0129 12:05:26.528188 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.529498 kubelet[3607]: E0129 12:05:26.528406 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.529498 kubelet[3607]: W0129 12:05:26.528416 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.529498 kubelet[3607]: E0129 12:05:26.528516 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.529498 kubelet[3607]: E0129 12:05:26.528683 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.529498 kubelet[3607]: W0129 12:05:26.528692 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.529498 kubelet[3607]: E0129 12:05:26.528979 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.529498 kubelet[3607]: W0129 12:05:26.528990 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.530327 kubelet[3607]: E0129 12:05:26.529828 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.530327 kubelet[3607]: E0129 12:05:26.530290 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.532252 kubelet[3607]: E0129 12:05:26.531863 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.532252 kubelet[3607]: W0129 12:05:26.531879 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.532252 kubelet[3607]: E0129 12:05:26.532003 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.535746 kubelet[3607]: E0129 12:05:26.535721 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.535746 kubelet[3607]: W0129 12:05:26.535745 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.536822 kubelet[3607]: E0129 12:05:26.536748 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.542166 kubelet[3607]: E0129 12:05:26.539890 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.542166 kubelet[3607]: W0129 12:05:26.539912 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.542166 kubelet[3607]: E0129 12:05:26.542093 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.551208 kubelet[3607]: E0129 12:05:26.550671 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.551208 kubelet[3607]: W0129 12:05:26.550699 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.551208 kubelet[3607]: E0129 12:05:26.550728 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.553875 kubelet[3607]: E0129 12:05:26.553727 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.553875 kubelet[3607]: W0129 12:05:26.553753 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.553875 kubelet[3607]: E0129 12:05:26.553778 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:05:26.566681 kubelet[3607]: E0129 12:05:26.566650 3607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:05:26.566681 kubelet[3607]: W0129 12:05:26.566679 3607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:05:26.566891 kubelet[3607]: E0129 12:05:26.566705 3607 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:05:26.731641 containerd[2010]: time="2025-01-29T12:05:26.731586940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7g8f,Uid:d596934c-ca0e-497b-8307-03e9f3bb089b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4161a176b6c90fc9f7fb62f690d2254a50450e9a5dd9e19432f883420fd39ba\"" Jan 29 12:05:26.736386 containerd[2010]: time="2025-01-29T12:05:26.736339685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 12:05:26.836899 containerd[2010]: time="2025-01-29T12:05:26.836225957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6569989c64-597x9,Uid:dfa413c7-004d-4a9c-acf7-8a54fa40e796,Namespace:calico-system,Attempt:0,} returns sandbox id \"0852e1f4c96cbe235f6bafee5f24757b82221d73b4fa5f40b5a9d2042f962f2f\"" Jan 29 12:05:27.771319 kubelet[3607]: E0129 12:05:27.769765 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:28.484534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379653680.mount: Deactivated successfully. Jan 29 12:05:28.758013 containerd[2010]: time="2025-01-29T12:05:28.757890428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:28.761726 containerd[2010]: time="2025-01-29T12:05:28.760101436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 12:05:28.781723 containerd[2010]: time="2025-01-29T12:05:28.781647859Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:28.791935 containerd[2010]: time="2025-01-29T12:05:28.791889593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:28.792981 containerd[2010]: time="2025-01-29T12:05:28.792939164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.056313273s" Jan 29 12:05:28.793094 containerd[2010]: time="2025-01-29T12:05:28.792988101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 12:05:28.797462 containerd[2010]: time="2025-01-29T12:05:28.797426418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 12:05:28.801533 containerd[2010]: time="2025-01-29T12:05:28.801494693Z" level=info msg="CreateContainer within sandbox \"b4161a176b6c90fc9f7fb62f690d2254a50450e9a5dd9e19432f883420fd39ba\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:05:28.853421 containerd[2010]: 
time="2025-01-29T12:05:28.853371836Z" level=info msg="CreateContainer within sandbox \"b4161a176b6c90fc9f7fb62f690d2254a50450e9a5dd9e19432f883420fd39ba\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aeeb20829ccfe8d756b8899d0ad145af4bbcc283c4b55d87d44ac465fea5974b\"" Jan 29 12:05:28.867964 containerd[2010]: time="2025-01-29T12:05:28.866745166Z" level=info msg="StartContainer for \"aeeb20829ccfe8d756b8899d0ad145af4bbcc283c4b55d87d44ac465fea5974b\"" Jan 29 12:05:28.989050 containerd[2010]: time="2025-01-29T12:05:28.988994872Z" level=info msg="StartContainer for \"aeeb20829ccfe8d756b8899d0ad145af4bbcc283c4b55d87d44ac465fea5974b\" returns successfully" Jan 29 12:05:29.083561 containerd[2010]: time="2025-01-29T12:05:29.045370001Z" level=info msg="shim disconnected" id=aeeb20829ccfe8d756b8899d0ad145af4bbcc283c4b55d87d44ac465fea5974b namespace=k8s.io Jan 29 12:05:29.083561 containerd[2010]: time="2025-01-29T12:05:29.083469030Z" level=warning msg="cleaning up after shim disconnected" id=aeeb20829ccfe8d756b8899d0ad145af4bbcc283c4b55d87d44ac465fea5974b namespace=k8s.io Jan 29 12:05:29.083561 containerd[2010]: time="2025-01-29T12:05:29.083489846Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:05:29.441779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeeb20829ccfe8d756b8899d0ad145af4bbcc283c4b55d87d44ac465fea5974b-rootfs.mount: Deactivated successfully. Jan 29 12:05:29.770636 kubelet[3607]: E0129 12:05:29.770319 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:31.773486 kubelet[3607]: E0129 12:05:31.772860 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:32.331918 containerd[2010]: time="2025-01-29T12:05:32.331866203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:32.339854 containerd[2010]: time="2025-01-29T12:05:32.335934571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 29 12:05:32.342525 containerd[2010]: time="2025-01-29T12:05:32.342455100Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:32.359398 containerd[2010]: time="2025-01-29T12:05:32.358534684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:32.362831 containerd[2010]: time="2025-01-29T12:05:32.361399774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.563931213s" Jan 29 12:05:32.362831 containerd[2010]: time="2025-01-29T12:05:32.361452802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 12:05:32.365701 containerd[2010]: time="2025-01-29T12:05:32.365621839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:05:32.400718 containerd[2010]: time="2025-01-29T12:05:32.400568169Z" level=info msg="CreateContainer within sandbox \"0852e1f4c96cbe235f6bafee5f24757b82221d73b4fa5f40b5a9d2042f962f2f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 12:05:32.443913 containerd[2010]: time="2025-01-29T12:05:32.443779675Z" level=info msg="CreateContainer within sandbox \"0852e1f4c96cbe235f6bafee5f24757b82221d73b4fa5f40b5a9d2042f962f2f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2f8554438dafb143963c6cd46aeea9c1393dfa4e2711d0d59dceb5c0adf1135e\"" Jan 29 12:05:32.446859 containerd[2010]: time="2025-01-29T12:05:32.446764888Z" level=info msg="StartContainer for \"2f8554438dafb143963c6cd46aeea9c1393dfa4e2711d0d59dceb5c0adf1135e\"" Jan 29 12:05:32.600350 containerd[2010]: time="2025-01-29T12:05:32.597161603Z" level=info msg="StartContainer for \"2f8554438dafb143963c6cd46aeea9c1393dfa4e2711d0d59dceb5c0adf1135e\" returns successfully" Jan 29 12:05:32.975110 kubelet[3607]: I0129 12:05:32.975021 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6569989c64-597x9" podStartSLOduration=2.474689984 podStartE2EDuration="7.974995497s" podCreationTimestamp="2025-01-29 12:05:25 +0000 UTC" firstStartedPulling="2025-01-29 12:05:26.86347463 +0000 UTC m=+23.296112512" lastFinishedPulling="2025-01-29 12:05:32.363780134 +0000 UTC m=+28.796418025" observedRunningTime="2025-01-29 12:05:32.95793236 +0000 UTC m=+29.390570257" watchObservedRunningTime="2025-01-29 12:05:32.974995497 +0000 UTC m=+29.407633396" Jan 29 12:05:33.769758 kubelet[3607]: E0129 12:05:33.769699 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:35.773613 kubelet[3607]: E0129 12:05:35.772148 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:37.770786 kubelet[3607]: E0129 12:05:37.770741 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:39.770036 kubelet[3607]: E0129 12:05:39.769934 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:41.770850 kubelet[3607]: E0129 12:05:41.770263 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:43.416263 containerd[2010]: time="2025-01-29T12:05:43.416212029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:43.481723 containerd[2010]: time="2025-01-29T12:05:43.481565992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 12:05:43.496012 containerd[2010]: time="2025-01-29T12:05:43.495554614Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:43.629309 containerd[2010]: time="2025-01-29T12:05:43.629256299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:43.631023 containerd[2010]: time="2025-01-29T12:05:43.630868394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 11.26517278s" Jan 29 12:05:43.631023 containerd[2010]: time="2025-01-29T12:05:43.630912920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 12:05:43.747609 containerd[2010]: time="2025-01-29T12:05:43.747568198Z" level=info msg="CreateContainer within sandbox \"b4161a176b6c90fc9f7fb62f690d2254a50450e9a5dd9e19432f883420fd39ba\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:05:43.771549 kubelet[3607]: E0129 12:05:43.770604 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:45.771224 kubelet[3607]: E0129 12:05:45.771178 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:45.980041 containerd[2010]: time="2025-01-29T12:05:45.979930770Z" level=info msg="CreateContainer within sandbox \"b4161a176b6c90fc9f7fb62f690d2254a50450e9a5dd9e19432f883420fd39ba\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f90c69f6b765c60df0c95e499c05de41aa98d7cfa2d985236548f96551aab7dc\"" Jan 29 12:05:45.984170 containerd[2010]: 
time="2025-01-29T12:05:45.984062406Z" level=info msg="StartContainer for \"f90c69f6b765c60df0c95e499c05de41aa98d7cfa2d985236548f96551aab7dc\"" Jan 29 12:05:46.140725 systemd[1]: run-containerd-runc-k8s.io-f90c69f6b765c60df0c95e499c05de41aa98d7cfa2d985236548f96551aab7dc-runc.PhAjTN.mount: Deactivated successfully. Jan 29 12:05:46.267741 containerd[2010]: time="2025-01-29T12:05:46.267693023Z" level=info msg="StartContainer for \"f90c69f6b765c60df0c95e499c05de41aa98d7cfa2d985236548f96551aab7dc\" returns successfully" Jan 29 12:05:46.784914 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:05:46.820745 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:05:46.784984 systemd-resolved[1893]: Flushed all caches. Jan 29 12:05:47.290221 systemd[1]: Started sshd@7-172.31.19.14:22-139.178.68.195:48342.service - OpenSSH per-connection server daemon (139.178.68.195:48342). Jan 29 12:05:47.771598 kubelet[3607]: E0129 12:05:47.770228 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:47.951610 sshd[4314]: Accepted publickey for core from 139.178.68.195 port 48342 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:47.957525 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:47.987220 systemd-logind[1987]: New session 8 of user core. Jan 29 12:05:47.994589 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:05:48.753741 sshd[4314]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:48.758952 systemd[1]: sshd@7-172.31.19.14:22-139.178.68.195:48342.service: Deactivated successfully. Jan 29 12:05:48.761456 systemd-logind[1987]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:05:48.769827 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:05:48.771656 systemd-logind[1987]: Removed session 8. Jan 29 12:05:48.833154 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:05:48.833184 systemd-resolved[1893]: Flushed all caches. Jan 29 12:05:48.834986 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:05:49.770709 kubelet[3607]: E0129 12:05:49.770500 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:50.881228 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:05:50.881256 systemd-resolved[1893]: Flushed all caches. Jan 29 12:05:50.882824 systemd-journald[1497]: Under memory pressure, flushing caches. 
Jan 29 12:05:51.770994 kubelet[3607]: E0129 12:05:51.770934 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:52.126968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f90c69f6b765c60df0c95e499c05de41aa98d7cfa2d985236548f96551aab7dc-rootfs.mount: Deactivated successfully. Jan 29 12:05:52.153640 kubelet[3607]: I0129 12:05:52.153457 3607 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:05:52.295631 kubelet[3607]: I0129 12:05:52.295472 3607 topology_manager.go:215] "Topology Admit Handler" podUID="bbdf3301-3539-4945-83b3-f31451672e0c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6glqj" Jan 29 12:05:52.296278 kubelet[3607]: I0129 12:05:52.295942 3607 topology_manager.go:215] "Topology Admit Handler" podUID="f814fb92-3f75-45fa-afb5-f59e7f19b575" podNamespace="calico-system" podName="calico-kube-controllers-f7c9d9464-548vs" Jan 29 12:05:52.299356 kubelet[3607]: I0129 12:05:52.299317 3607 topology_manager.go:215] "Topology Admit Handler" podUID="393c076b-4fd5-42ce-ac5b-7c010e93a9f4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8nm9g" Jan 29 12:05:52.316137 containerd[2010]: time="2025-01-29T12:05:52.315732769Z" level=info msg="shim disconnected" id=f90c69f6b765c60df0c95e499c05de41aa98d7cfa2d985236548f96551aab7dc namespace=k8s.io Jan 29 12:05:52.316137 containerd[2010]: time="2025-01-29T12:05:52.315852227Z" level=warning msg="cleaning up after shim disconnected" id=f90c69f6b765c60df0c95e499c05de41aa98d7cfa2d985236548f96551aab7dc namespace=k8s.io Jan 29 12:05:52.316137 containerd[2010]: time="2025-01-29T12:05:52.315866122Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:05:52.319779 kubelet[3607]: I0129 12:05:52.319731 3607 topology_manager.go:215] "Topology Admit Handler" podUID="3a4175b0-1c11-4e3d-bc96-94db0994a2b9" podNamespace="calico-apiserver" podName="calico-apiserver-5979bcbbd4-bfmd4" Jan 29 12:05:52.322822 kubelet[3607]: I0129 12:05:52.322180 3607 topology_manager.go:215] "Topology Admit Handler" podUID="4a677651-8370-4ade-886d-e86025868e97" podNamespace="calico-apiserver" podName="calico-apiserver-5979bcbbd4-mbmww" Jan 29 12:05:52.409085 kubelet[3607]: I0129 12:05:52.408766 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbdf3301-3539-4945-83b3-f31451672e0c-config-volume\") pod \"coredns-7db6d8ff4d-6glqj\" (UID: \"bbdf3301-3539-4945-83b3-f31451672e0c\") " pod="kube-system/coredns-7db6d8ff4d-6glqj" Jan 29 12:05:52.409085 kubelet[3607]: I0129 12:05:52.408832 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45cnd\" (UniqueName: \"kubernetes.io/projected/f814fb92-3f75-45fa-afb5-f59e7f19b575-kube-api-access-45cnd\") pod \"calico-kube-controllers-f7c9d9464-548vs\" (UID: \"f814fb92-3f75-45fa-afb5-f59e7f19b575\") " pod="calico-system/calico-kube-controllers-f7c9d9464-548vs" Jan 29 12:05:52.409085 kubelet[3607]: I0129 12:05:52.408864 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393c076b-4fd5-42ce-ac5b-7c010e93a9f4-config-volume\") pod 
\"coredns-7db6d8ff4d-8nm9g\" (UID: \"393c076b-4fd5-42ce-ac5b-7c010e93a9f4\") " pod="kube-system/coredns-7db6d8ff4d-8nm9g" Jan 29 12:05:52.409085 kubelet[3607]: I0129 12:05:52.408897 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzx8n\" (UniqueName: \"kubernetes.io/projected/393c076b-4fd5-42ce-ac5b-7c010e93a9f4-kube-api-access-hzx8n\") pod \"coredns-7db6d8ff4d-8nm9g\" (UID: \"393c076b-4fd5-42ce-ac5b-7c010e93a9f4\") " pod="kube-system/coredns-7db6d8ff4d-8nm9g" Jan 29 12:05:52.409085 kubelet[3607]: I0129 12:05:52.408926 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f814fb92-3f75-45fa-afb5-f59e7f19b575-tigera-ca-bundle\") pod \"calico-kube-controllers-f7c9d9464-548vs\" (UID: \"f814fb92-3f75-45fa-afb5-f59e7f19b575\") " pod="calico-system/calico-kube-controllers-f7c9d9464-548vs" Jan 29 12:05:52.409623 kubelet[3607]: I0129 12:05:52.408954 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpkd\" (UniqueName: \"kubernetes.io/projected/bbdf3301-3539-4945-83b3-f31451672e0c-kube-api-access-wrpkd\") pod \"coredns-7db6d8ff4d-6glqj\" (UID: \"bbdf3301-3539-4945-83b3-f31451672e0c\") " pod="kube-system/coredns-7db6d8ff4d-6glqj" Jan 29 12:05:52.510086 kubelet[3607]: I0129 12:05:52.510038 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf7p2\" (UniqueName: \"kubernetes.io/projected/3a4175b0-1c11-4e3d-bc96-94db0994a2b9-kube-api-access-zf7p2\") pod \"calico-apiserver-5979bcbbd4-bfmd4\" (UID: \"3a4175b0-1c11-4e3d-bc96-94db0994a2b9\") " pod="calico-apiserver/calico-apiserver-5979bcbbd4-bfmd4" Jan 29 12:05:52.512980 kubelet[3607]: I0129 12:05:52.510159 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4a677651-8370-4ade-886d-e86025868e97-calico-apiserver-certs\") pod \"calico-apiserver-5979bcbbd4-mbmww\" (UID: \"4a677651-8370-4ade-886d-e86025868e97\") " pod="calico-apiserver/calico-apiserver-5979bcbbd4-mbmww" Jan 29 12:05:52.512980 kubelet[3607]: I0129 12:05:52.510191 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbdsm\" (UniqueName: \"kubernetes.io/projected/4a677651-8370-4ade-886d-e86025868e97-kube-api-access-mbdsm\") pod \"calico-apiserver-5979bcbbd4-mbmww\" (UID: \"4a677651-8370-4ade-886d-e86025868e97\") " pod="calico-apiserver/calico-apiserver-5979bcbbd4-mbmww" Jan 29 12:05:52.512980 kubelet[3607]: I0129 12:05:52.510423 3607 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a4175b0-1c11-4e3d-bc96-94db0994a2b9-calico-apiserver-certs\") pod \"calico-apiserver-5979bcbbd4-bfmd4\" (UID: \"3a4175b0-1c11-4e3d-bc96-94db0994a2b9\") " pod="calico-apiserver/calico-apiserver-5979bcbbd4-bfmd4" Jan 29 12:05:52.638437 containerd[2010]: time="2025-01-29T12:05:52.638394711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6glqj,Uid:bbdf3301-3539-4945-83b3-f31451672e0c,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:52.642736 containerd[2010]: time="2025-01-29T12:05:52.642643571Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-f7c9d9464-548vs,Uid:f814fb92-3f75-45fa-afb5-f59e7f19b575,Namespace:calico-system,Attempt:0,}" Jan 29 12:05:52.649758 containerd[2010]: time="2025-01-29T12:05:52.649715062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8nm9g,Uid:393c076b-4fd5-42ce-ac5b-7c010e93a9f4,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:52.653018 containerd[2010]: time="2025-01-29T12:05:52.652944215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcbbd4-mbmww,Uid:4a677651-8370-4ade-886d-e86025868e97,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:05:52.667208 containerd[2010]: time="2025-01-29T12:05:52.667044317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcbbd4-bfmd4,Uid:3a4175b0-1c11-4e3d-bc96-94db0994a2b9,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:05:53.085240 containerd[2010]: time="2025-01-29T12:05:53.085201790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:05:53.253201 containerd[2010]: time="2025-01-29T12:05:53.253140298Z" level=error msg="Failed to destroy network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.260036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4-shm.mount: Deactivated successfully. Jan 29 12:05:53.268167 containerd[2010]: time="2025-01-29T12:05:53.263493061Z" level=error msg="Failed to destroy network for sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.274152 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491-shm.mount: Deactivated successfully. 
Jan 29 12:05:53.277136 containerd[2010]: time="2025-01-29T12:05:53.277077119Z" level=error msg="encountered an error cleaning up failed sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.280854 containerd[2010]: time="2025-01-29T12:05:53.280048003Z" level=error msg="encountered an error cleaning up failed sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.291587 containerd[2010]: time="2025-01-29T12:05:53.290161974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcbbd4-bfmd4,Uid:3a4175b0-1c11-4e3d-bc96-94db0994a2b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.294536 containerd[2010]: time="2025-01-29T12:05:53.294478404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7c9d9464-548vs,Uid:f814fb92-3f75-45fa-afb5-f59e7f19b575,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.300854 containerd[2010]: time="2025-01-29T12:05:53.300719068Z" level=error msg="Failed to destroy network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.303825 containerd[2010]: time="2025-01-29T12:05:53.301154848Z" level=error msg="encountered an error cleaning up failed sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.303825 containerd[2010]: time="2025-01-29T12:05:53.301217144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6glqj,Uid:bbdf3301-3539-4945-83b3-f31451672e0c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.303825 containerd[2010]: time="2025-01-29T12:05:53.301368377Z" level=error msg="Failed to destroy network for sandbox 
\"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.303825 containerd[2010]: time="2025-01-29T12:05:53.302203137Z" level=error msg="encountered an error cleaning up failed sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.303825 containerd[2010]: time="2025-01-29T12:05:53.302258053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcbbd4-mbmww,Uid:4a677651-8370-4ade-886d-e86025868e97,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.304190 kubelet[3607]: E0129 12:05:53.303242 3607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.304190 kubelet[3607]: E0129 12:05:53.303370 3607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5979bcbbd4-mbmww" Jan 29 12:05:53.304190 kubelet[3607]: E0129 12:05:53.303401 3607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5979bcbbd4-mbmww" Jan 29 12:05:53.306177 kubelet[3607]: E0129 12:05:53.303454 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5979bcbbd4-mbmww_calico-apiserver(4a677651-8370-4ade-886d-e86025868e97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5979bcbbd4-mbmww_calico-apiserver(4a677651-8370-4ade-886d-e86025868e97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5979bcbbd4-mbmww" 
podUID="4a677651-8370-4ade-886d-e86025868e97" Jan 29 12:05:53.306177 kubelet[3607]: E0129 12:05:53.304935 3607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.306177 kubelet[3607]: E0129 12:05:53.305006 3607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7c9d9464-548vs" Jan 29 12:05:53.306425 kubelet[3607]: E0129 12:05:53.305035 3607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7c9d9464-548vs" Jan 29 12:05:53.306425 kubelet[3607]: E0129 12:05:53.305110 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f7c9d9464-548vs_calico-system(f814fb92-3f75-45fa-afb5-f59e7f19b575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f7c9d9464-548vs_calico-system(f814fb92-3f75-45fa-afb5-f59e7f19b575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7c9d9464-548vs" podUID="f814fb92-3f75-45fa-afb5-f59e7f19b575" Jan 29 12:05:53.306425 kubelet[3607]: E0129 12:05:53.305160 3607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.306583 kubelet[3607]: E0129 12:05:53.305189 3607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5979bcbbd4-bfmd4" Jan 29 12:05:53.306583 kubelet[3607]: E0129 12:05:53.305213 3607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5979bcbbd4-bfmd4" Jan 29 12:05:53.306583 kubelet[3607]: E0129 12:05:53.305249 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5979bcbbd4-bfmd4_calico-apiserver(3a4175b0-1c11-4e3d-bc96-94db0994a2b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5979bcbbd4-bfmd4_calico-apiserver(3a4175b0-1c11-4e3d-bc96-94db0994a2b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5979bcbbd4-bfmd4" podUID="3a4175b0-1c11-4e3d-bc96-94db0994a2b9" Jan 29 12:05:53.306728 kubelet[3607]: E0129 12:05:53.305301 3607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.306728 kubelet[3607]: E0129 12:05:53.305328 3607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6glqj" Jan 29 12:05:53.306728 kubelet[3607]: E0129 12:05:53.305352 3607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6glqj" Jan 29 12:05:53.307344 kubelet[3607]: E0129 12:05:53.305395 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6glqj_kube-system(bbdf3301-3539-4945-83b3-f31451672e0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6glqj_kube-system(bbdf3301-3539-4945-83b3-f31451672e0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6glqj" podUID="bbdf3301-3539-4945-83b3-f31451672e0c" Jan 29 12:05:53.309997 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311-shm.mount: Deactivated 
successfully. Jan 29 12:05:53.317407 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65-shm.mount: Deactivated successfully. Jan 29 12:05:53.343730 containerd[2010]: time="2025-01-29T12:05:53.341842420Z" level=error msg="Failed to destroy network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.347659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40-shm.mount: Deactivated successfully. Jan 29 12:05:53.347836 containerd[2010]: time="2025-01-29T12:05:53.346491526Z" level=error msg="encountered an error cleaning up failed sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.347836 containerd[2010]: time="2025-01-29T12:05:53.347763254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8nm9g,Uid:393c076b-4fd5-42ce-ac5b-7c010e93a9f4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.351358 kubelet[3607]: E0129 12:05:53.350009 3607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.351358 kubelet[3607]: E0129 12:05:53.350073 3607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8nm9g" Jan 29 12:05:53.351358 kubelet[3607]: E0129 12:05:53.350100 3607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8nm9g" Jan 29 12:05:53.351631 kubelet[3607]: E0129 12:05:53.350162 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8nm9g_kube-system(393c076b-4fd5-42ce-ac5b-7c010e93a9f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-8nm9g_kube-system(393c076b-4fd5-42ce-ac5b-7c010e93a9f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8nm9g" podUID="393c076b-4fd5-42ce-ac5b-7c010e93a9f4" Jan 29 12:05:53.774855 containerd[2010]: time="2025-01-29T12:05:53.774491958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht95p,Uid:6e3ae492-9704-4aa3-aacf-00b3ecf4f562,Namespace:calico-system,Attempt:0,}" Jan 29 12:05:53.786413 systemd[1]: Started sshd@8-172.31.19.14:22-139.178.68.195:48352.service - OpenSSH per-connection server daemon (139.178.68.195:48352). Jan 29 12:05:53.908642 containerd[2010]: time="2025-01-29T12:05:53.908457032Z" level=error msg="Failed to destroy network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.909868 containerd[2010]: time="2025-01-29T12:05:53.909695165Z" level=error msg="encountered an error cleaning up failed sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.909868 containerd[2010]: time="2025-01-29T12:05:53.909772556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht95p,Uid:6e3ae492-9704-4aa3-aacf-00b3ecf4f562,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.910297 kubelet[3607]: E0129 12:05:53.910253 3607 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:53.910433 kubelet[3607]: E0129 12:05:53.910321 3607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht95p" Jan 29 12:05:53.910433 kubelet[3607]: E0129 12:05:53.910347 3607 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht95p" Jan 29 12:05:53.910831 kubelet[3607]: E0129 12:05:53.910415 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ht95p_calico-system(6e3ae492-9704-4aa3-aacf-00b3ecf4f562)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ht95p_calico-system(6e3ae492-9704-4aa3-aacf-00b3ecf4f562)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:53.994758 sshd[4521]: Accepted publickey for core from 139.178.68.195 port 48352 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:53.997484 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:54.008348 systemd-logind[1987]: New session 9 of user core. Jan 29 12:05:54.014247 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:05:54.083774 kubelet[3607]: I0129 12:05:54.083415 3607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:05:54.091861 kubelet[3607]: I0129 12:05:54.091084 3607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:05:54.104125 containerd[2010]: time="2025-01-29T12:05:54.103864274Z" level=info msg="StopPodSandbox for \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\"" Jan 29 12:05:54.106564 containerd[2010]: time="2025-01-29T12:05:54.106199830Z" level=info msg="Ensure that sandbox 4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491 in task-service has been cleanup successfully" Jan 29 12:05:54.113470 containerd[2010]: time="2025-01-29T12:05:54.113409071Z" level=info msg="StopPodSandbox for \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\"" Jan 29 12:05:54.114108 containerd[2010]: time="2025-01-29T12:05:54.113753660Z" level=info msg="Ensure that sandbox c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65 in task-service has been cleanup successfully" Jan 29 12:05:54.127287 kubelet[3607]: I0129 12:05:54.126224 3607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:05:54.134774 containerd[2010]: time="2025-01-29T12:05:54.134727610Z" level=info msg="StopPodSandbox for \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\"" Jan 29 12:05:54.135388 containerd[2010]: time="2025-01-29T12:05:54.135325672Z" level=info msg="Ensure that sandbox 66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311 in task-service has been cleanup successfully" Jan 29 12:05:54.141900 kubelet[3607]: I0129 12:05:54.141871 3607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:05:54.153552 containerd[2010]: time="2025-01-29T12:05:54.152948372Z" level=info 
msg="StopPodSandbox for \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\"" Jan 29 12:05:54.153552 containerd[2010]: time="2025-01-29T12:05:54.153171502Z" level=info msg="Ensure that sandbox 4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40 in task-service has been cleanup successfully" Jan 29 12:05:54.156654 kubelet[3607]: I0129 12:05:54.156570 3607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:05:54.167625 containerd[2010]: time="2025-01-29T12:05:54.164776956Z" level=info msg="StopPodSandbox for \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\"" Jan 29 12:05:54.168871 containerd[2010]: time="2025-01-29T12:05:54.167878782Z" level=info msg="Ensure that sandbox 06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f in task-service has been cleanup successfully" Jan 29 12:05:54.176019 kubelet[3607]: I0129 12:05:54.174944 3607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:05:54.181892 containerd[2010]: time="2025-01-29T12:05:54.181221392Z" level=info msg="StopPodSandbox for \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\"" Jan 29 12:05:54.181892 containerd[2010]: time="2025-01-29T12:05:54.181538754Z" level=info msg="Ensure that sandbox 7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4 in task-service has been cleanup successfully" Jan 29 12:05:54.473283 containerd[2010]: time="2025-01-29T12:05:54.473218263Z" level=error msg="StopPodSandbox for \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\" failed" error="failed to destroy network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:54.478657 kubelet[3607]: E0129 12:05:54.478279 3607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:05:54.501121 sshd[4521]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:54.503288 kubelet[3607]: E0129 12:05:54.492405 3607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65"} Jan 29 12:05:54.503840 kubelet[3607]: E0129 12:05:54.503525 3607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbdf3301-3539-4945-83b3-f31451672e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:54.503840 kubelet[3607]: E0129 12:05:54.503568 3607 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bbdf3301-3539-4945-83b3-f31451672e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6glqj" podUID="bbdf3301-3539-4945-83b3-f31451672e0c" Jan 29 12:05:54.520817 kubelet[3607]: E0129 12:05:54.515631 3607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:05:54.520817 kubelet[3607]: E0129 12:05:54.515712 3607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311"} Jan 29 12:05:54.520817 kubelet[3607]: E0129 12:05:54.515778 3607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a677651-8370-4ade-886d-e86025868e97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:54.520817 kubelet[3607]: E0129 12:05:54.515978 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a677651-8370-4ade-886d-e86025868e97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5979bcbbd4-mbmww" podUID="4a677651-8370-4ade-886d-e86025868e97" Jan 29 12:05:54.521206 containerd[2010]: time="2025-01-29T12:05:54.515281180Z" level=error msg="StopPodSandbox for \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\" failed" error="failed to destroy network for sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:54.514978 systemd[1]: sshd@8-172.31.19.14:22-139.178.68.195:48352.service: Deactivated successfully. Jan 29 12:05:54.528243 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:05:54.528856 systemd-logind[1987]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:05:54.535349 systemd-logind[1987]: Removed session 9. 
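
Every failure in the burst above reduces to the same missing file: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes (via a hostPath mount) once it is running on the host, and until that file exists every CNI ADD and DEL on the node fails, so kubelet keeps re-queueing the pods. A minimal Python sketch of that gating check, for illustration only — the real plugin is Go (projectcalico/cni-plugin); only the path and error text are taken from the log:

    NODENAME_FILE = "/var/lib/calico/nodename"

    def detect_nodename() -> str:
        # Mirrors the check behind the repeated log error: calico-node
        # writes this file only after it has started on the host.
        try:
            with open(NODENAME_FILE) as f:
                return f.read().strip()
        except FileNotFoundError:
            raise RuntimeError(
                f"stat {NODENAME_FILE}: no such file or directory: "
                "check that the calico/node container is running and has "
                "mounted /var/lib/calico/"
            )

Because sandbox teardown goes through the same plugin, the StopPodSandbox cleanups interleaved above and below fail with exactly the same message as the original CreatePodSandbox calls, until calico-node finally starts later in the log.
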
Jan 29 12:05:54.570199 containerd[2010]: time="2025-01-29T12:05:54.570061896Z" level=error msg="StopPodSandbox for \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\" failed" error="failed to destroy network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:54.571362 kubelet[3607]: E0129 12:05:54.570729 3607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:05:54.571362 kubelet[3607]: E0129 12:05:54.570784 3607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40"} Jan 29 12:05:54.571362 kubelet[3607]: E0129 12:05:54.570859 3607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"393c076b-4fd5-42ce-ac5b-7c010e93a9f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:54.571362 kubelet[3607]: E0129 12:05:54.570891 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"393c076b-4fd5-42ce-ac5b-7c010e93a9f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8nm9g" podUID="393c076b-4fd5-42ce-ac5b-7c010e93a9f4" Jan 29 12:05:54.613914 containerd[2010]: time="2025-01-29T12:05:54.613856928Z" level=error msg="StopPodSandbox for \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\" failed" error="failed to destroy network for sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:54.614839 kubelet[3607]: E0129 12:05:54.614344 3607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:05:54.614839 kubelet[3607]: E0129 12:05:54.614405 3607 
kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491"} Jan 29 12:05:54.614839 kubelet[3607]: E0129 12:05:54.614455 3607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a4175b0-1c11-4e3d-bc96-94db0994a2b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:54.614839 kubelet[3607]: E0129 12:05:54.614486 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a4175b0-1c11-4e3d-bc96-94db0994a2b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5979bcbbd4-bfmd4" podUID="3a4175b0-1c11-4e3d-bc96-94db0994a2b9" Jan 29 12:05:54.618669 containerd[2010]: time="2025-01-29T12:05:54.618495033Z" level=error msg="StopPodSandbox for \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\" failed" error="failed to destroy network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:54.618980 kubelet[3607]: E0129 12:05:54.618765 3607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:05:54.618980 kubelet[3607]: E0129 12:05:54.618843 3607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4"} Jan 29 12:05:54.618980 kubelet[3607]: E0129 12:05:54.618883 3607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f814fb92-3f75-45fa-afb5-f59e7f19b575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:54.618980 kubelet[3607]: E0129 12:05:54.618920 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f814fb92-3f75-45fa-afb5-f59e7f19b575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7c9d9464-548vs" podUID="f814fb92-3f75-45fa-afb5-f59e7f19b575" Jan 29 12:05:54.620757 containerd[2010]: time="2025-01-29T12:05:54.620705261Z" level=error msg="StopPodSandbox for \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\" failed" error="failed to destroy network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:54.621084 kubelet[3607]: E0129 12:05:54.621047 3607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:05:54.621183 kubelet[3607]: E0129 12:05:54.621100 3607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f"} Jan 29 12:05:54.621183 kubelet[3607]: E0129 12:05:54.621150 3607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e3ae492-9704-4aa3-aacf-00b3ecf4f562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:54.621347 kubelet[3607]: E0129 12:05:54.621179 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e3ae492-9704-4aa3-aacf-00b3ecf4f562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:05:59.542402 systemd[1]: Started sshd@9-172.31.19.14:22-139.178.68.195:43850.service - OpenSSH per-connection server daemon (139.178.68.195:43850). Jan 29 12:05:59.812331 sshd[4680]: Accepted publickey for core from 139.178.68.195 port 43850 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:59.815405 sshd[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:59.826740 systemd-logind[1987]: New session 10 of user core. Jan 29 12:05:59.834160 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 12:06:00.313748 sshd[4680]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:00.321413 systemd[1]: sshd@9-172.31.19.14:22-139.178.68.195:43850.service: Deactivated successfully. Jan 29 12:06:00.348867 systemd-logind[1987]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:06:00.364622 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:06:00.387195 systemd[1]: Started sshd@10-172.31.19.14:22-139.178.68.195:43866.service - OpenSSH per-connection server daemon (139.178.68.195:43866). Jan 29 12:06:00.395702 systemd-logind[1987]: Removed session 10. Jan 29 12:06:00.629786 sshd[4695]: Accepted publickey for core from 139.178.68.195 port 43866 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:00.632346 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:00.649326 systemd-logind[1987]: New session 11 of user core. Jan 29 12:06:00.654745 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:06:00.801343 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:00.805218 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:00.801383 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:01.232031 sshd[4695]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:01.261002 systemd[1]: Started sshd@11-172.31.19.14:22-139.178.68.195:43880.service - OpenSSH per-connection server daemon (139.178.68.195:43880). Jan 29 12:06:01.277712 systemd[1]: sshd@10-172.31.19.14:22-139.178.68.195:43866.service: Deactivated successfully. Jan 29 12:06:01.292841 systemd-logind[1987]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:06:01.293668 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:06:01.303152 systemd-logind[1987]: Removed session 11. Jan 29 12:06:01.568036 sshd[4704]: Accepted publickey for core from 139.178.68.195 port 43880 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:01.571376 sshd[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:01.624817 systemd-logind[1987]: New session 12 of user core. Jan 29 12:06:01.670148 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:06:02.521847 sshd[4704]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:02.532030 systemd[1]: sshd@11-172.31.19.14:22-139.178.68.195:43880.service: Deactivated successfully. Jan 29 12:06:02.538872 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:06:02.540934 systemd-logind[1987]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:06:02.545260 systemd-logind[1987]: Removed session 12. Jan 29 12:06:02.853584 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:02.849886 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:02.849895 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:04.901034 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:04.897279 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:04.897289 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:05.634210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930882567.mount: Deactivated successfully. 
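
The paired systemd-resolved and systemd-journald "Under memory pressure, flushing caches." entries above are both reactions to the kernel's PSI (pressure stall information) signal: recent systemd releases let daemons watch /proc/pressure/memory (or the cgroup equivalent) and trim their caches when stalls rise. A rough sketch of reading that interface — the field layout is the documented PSI format, but the 10% cutoff is invented for illustration and is not systemd's actual threshold:

    def read_memory_pressure(path: str = "/proc/pressure/memory") -> dict:
        # Each line looks like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0"
        pressure = {}
        with open(path) as f:
            for line in f:
                kind, *fields = line.split()
                pressure[kind] = dict(field.split("=") for field in fields)
        return pressure

    p = read_memory_pressure()
    if float(p["some"]["avg10"]) > 10.0:   # illustrative threshold only
        print("memory stalls rising; a PSI-aware daemon would flush caches here")
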
Jan 29 12:06:05.792761 containerd[2010]: time="2025-01-29T12:06:05.791794974Z" level=info msg="StopPodSandbox for \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\"" Jan 29 12:06:05.870116 containerd[2010]: time="2025-01-29T12:06:05.869860697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:06:05.870116 containerd[2010]: time="2025-01-29T12:06:05.869994389Z" level=error msg="StopPodSandbox for \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\" failed" error="failed to destroy network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:06:05.879156 kubelet[3607]: E0129 12:06:05.879014 3607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:06:05.879156 kubelet[3607]: E0129 12:06:05.879109 3607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65"} Jan 29 12:06:05.879156 kubelet[3607]: E0129 12:06:05.879158 3607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbdf3301-3539-4945-83b3-f31451672e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:06:05.879917 kubelet[3607]: E0129 12:06:05.879191 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bbdf3301-3539-4945-83b3-f31451672e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6glqj" podUID="bbdf3301-3539-4945-83b3-f31451672e0c" Jan 29 12:06:05.934636 containerd[2010]: time="2025-01-29T12:06:05.934490279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 12.799957448s" Jan 29 12:06:05.941931 containerd[2010]: time="2025-01-29T12:06:05.941697105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 12:06:05.954555 
containerd[2010]: time="2025-01-29T12:06:05.954358603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:05.978975 containerd[2010]: time="2025-01-29T12:06:05.977625865Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:05.978975 containerd[2010]: time="2025-01-29T12:06:05.978719721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:05.995239 containerd[2010]: time="2025-01-29T12:06:05.995137577Z" level=info msg="CreateContainer within sandbox \"b4161a176b6c90fc9f7fb62f690d2254a50450e9a5dd9e19432f883420fd39ba\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:06:06.087130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618215842.mount: Deactivated successfully. Jan 29 12:06:06.114460 containerd[2010]: time="2025-01-29T12:06:06.114407173Z" level=info msg="CreateContainer within sandbox \"b4161a176b6c90fc9f7fb62f690d2254a50450e9a5dd9e19432f883420fd39ba\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f63eb1f74a8ff373a6dcc459f78fbaad119784785054f4d82d8c2875399e2a2b\"" Jan 29 12:06:06.116626 containerd[2010]: time="2025-01-29T12:06:06.115366520Z" level=info msg="StartContainer for \"f63eb1f74a8ff373a6dcc459f78fbaad119784785054f4d82d8c2875399e2a2b\"" Jan 29 12:06:06.519639 containerd[2010]: time="2025-01-29T12:06:06.519596110Z" level=info msg="StartContainer for \"f63eb1f74a8ff373a6dcc459f78fbaad119784785054f4d82d8c2875399e2a2b\" returns successfully" Jan 29 12:06:06.774814 containerd[2010]: time="2025-01-29T12:06:06.774613700Z" level=info msg="StopPodSandbox for \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\"" Jan 29 12:06:06.880054 containerd[2010]: time="2025-01-29T12:06:06.879999910Z" level=error msg="StopPodSandbox for \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\" failed" error="failed to destroy network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:06:06.880648 kubelet[3607]: E0129 12:06:06.880233 3607 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:06:06.880648 kubelet[3607]: E0129 12:06:06.880294 3607 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f"} Jan 29 12:06:06.880648 kubelet[3607]: E0129 12:06:06.880342 3607 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e3ae492-9704-4aa3-aacf-00b3ecf4f562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:06:06.880648 kubelet[3607]: E0129 12:06:06.880429 3607 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e3ae492-9704-4aa3-aacf-00b3ecf4f562\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht95p" podUID="6e3ae492-9704-4aa3-aacf-00b3ecf4f562" Jan 29 12:06:07.008893 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:06:07.010593 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 29 12:06:07.459642 kubelet[3607]: I0129 12:06:07.434916 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n7g8f" podStartSLOduration=3.202886808 podStartE2EDuration="42.412040946s" podCreationTimestamp="2025-01-29 12:05:25 +0000 UTC" firstStartedPulling="2025-01-29 12:05:26.734835773 +0000 UTC m=+23.167473653" lastFinishedPulling="2025-01-29 12:06:05.943989916 +0000 UTC m=+62.376627791" observedRunningTime="2025-01-29 12:06:07.409501098 +0000 UTC m=+63.842138993" watchObservedRunningTime="2025-01-29 12:06:07.412040946 +0000 UTC m=+63.844678841" Jan 29 12:06:07.550132 systemd[1]: Started sshd@12-172.31.19.14:22-139.178.68.195:52434.service - OpenSSH per-connection server daemon (139.178.68.195:52434). Jan 29 12:06:07.763029 sshd[4843]: Accepted publickey for core from 139.178.68.195 port 52434 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:07.765612 sshd[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:07.771700 systemd-logind[1987]: New session 13 of user core. Jan 29 12:06:07.777308 containerd[2010]: time="2025-01-29T12:06:07.776301181Z" level=info msg="StopPodSandbox for \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\"" Jan 29 12:06:07.779305 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:06:08.043526 sshd[4843]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:08.047322 systemd[1]: sshd@12-172.31.19.14:22-139.178.68.195:52434.service: Deactivated successfully. Jan 29 12:06:08.051956 systemd-logind[1987]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:06:08.053162 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:06:08.056535 systemd-logind[1987]: Removed session 13. Jan 29 12:06:08.419966 systemd[1]: run-containerd-runc-k8s.io-f63eb1f74a8ff373a6dcc459f78fbaad119784785054f4d82d8c2875399e2a2b-runc.EdxmyA.mount: Deactivated successfully. Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:07.920 [INFO][4866] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:07.922 [INFO][4866] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" iface="eth0" netns="/var/run/netns/cni-7adf5da0-904e-61e0-0cf2-829966a6ed8b" Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:07.924 [INFO][4866] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" iface="eth0" netns="/var/run/netns/cni-7adf5da0-904e-61e0-0cf2-829966a6ed8b" Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:07.930 [INFO][4866] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" iface="eth0" netns="/var/run/netns/cni-7adf5da0-904e-61e0-0cf2-829966a6ed8b" Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:07.930 [INFO][4866] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:07.930 [INFO][4866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:08.352 [INFO][4879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" HandleID="k8s-pod-network.4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:08.363 [INFO][4879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:08.364 [INFO][4879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:08.397 [WARNING][4879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" HandleID="k8s-pod-network.4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:08.397 [INFO][4879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" HandleID="k8s-pod-network.4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:08.401 [INFO][4879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:06:08.423149 containerd[2010]: 2025-01-29 12:06:08.406 [INFO][4866] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:06:08.428336 containerd[2010]: time="2025-01-29T12:06:08.425877348Z" level=info msg="TearDown network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\" successfully" Jan 29 12:06:08.430760 containerd[2010]: time="2025-01-29T12:06:08.430012326Z" level=info msg="StopPodSandbox for \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\" returns successfully" Jan 29 12:06:08.430354 systemd[1]: run-netns-cni\x2d7adf5da0\x2d904e\x2d61e0\x2d0cf2\x2d829966a6ed8b.mount: Deactivated successfully. 
Jan 29 12:06:08.452437 containerd[2010]: time="2025-01-29T12:06:08.452383806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8nm9g,Uid:393c076b-4fd5-42ce-ac5b-7c010e93a9f4,Namespace:kube-system,Attempt:1,}" Jan 29 12:06:08.772076 containerd[2010]: time="2025-01-29T12:06:08.772017960Z" level=info msg="StopPodSandbox for \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\"" Jan 29 12:06:08.773571 containerd[2010]: time="2025-01-29T12:06:08.772269592Z" level=info msg="StopPodSandbox for \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\"" Jan 29 12:06:08.947227 (udev-worker)[4784]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:06:08.963526 systemd-networkd[1570]: calic1e99c52a1c: Link UP Jan 29 12:06:08.968056 systemd-networkd[1570]: calic1e99c52a1c: Gained carrier Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.589 [INFO][4914] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.624 [INFO][4914] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0 coredns-7db6d8ff4d- kube-system 393c076b-4fd5-42ce-ac5b-7c010e93a9f4 895 0 2025-01-29 12:05:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-14 coredns-7db6d8ff4d-8nm9g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic1e99c52a1c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8nm9g" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.624 [INFO][4914] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8nm9g" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.759 [INFO][4965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" HandleID="k8s-pod-network.b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.806 [INFO][4965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" HandleID="k8s-pod-network.b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002656b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-14", "pod":"coredns-7db6d8ff4d-8nm9g", "timestamp":"2025-01-29 12:06:08.75779127 +0000 UTC"}, Hostname:"ip-172-31-19-14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 
12:06:08.806 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.806 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.806 [INFO][4965] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-14' Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.816 [INFO][4965] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" host="ip-172-31-19-14" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.836 [INFO][4965] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-14" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.852 [INFO][4965] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.859 [INFO][4965] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.867 [INFO][4965] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.867 [INFO][4965] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" host="ip-172-31-19-14" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.875 [INFO][4965] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.891 [INFO][4965] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" host="ip-172-31-19-14" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.907 [INFO][4965] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.129/26] block=192.168.62.128/26 handle="k8s-pod-network.b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" host="ip-172-31-19-14" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.908 [INFO][4965] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.129/26] handle="k8s-pod-network.b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" host="ip-172-31-19-14" Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.908 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:06:09.083679 containerd[2010]: 2025-01-29 12:06:08.908 [INFO][4965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.129/26] IPv6=[] ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" HandleID="k8s-pod-network.b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:06:09.086383 containerd[2010]: 2025-01-29 12:06:08.925 [INFO][4914] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8nm9g" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"393c076b-4fd5-42ce-ac5b-7c010e93a9f4", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"", Pod:"coredns-7db6d8ff4d-8nm9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1e99c52a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:09.086383 containerd[2010]: 2025-01-29 12:06:08.925 [INFO][4914] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.129/32] ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8nm9g" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:06:09.086383 containerd[2010]: 2025-01-29 12:06:08.926 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1e99c52a1c ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8nm9g" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:06:09.086383 containerd[2010]: 2025-01-29 12:06:08.961 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8nm9g" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" 
Jan 29 12:06:09.086383 containerd[2010]: 2025-01-29 12:06:08.967 [INFO][4914] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8nm9g" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"393c076b-4fd5-42ce-ac5b-7c010e93a9f4", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca", Pod:"coredns-7db6d8ff4d-8nm9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1e99c52a1c", MAC:"42:4b:cf:4e:8e:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:09.086383 containerd[2010]: 2025-01-29 12:06:09.022 [INFO][4914] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8nm9g" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:06:09.342003 containerd[2010]: time="2025-01-29T12:06:09.339555277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:06:09.342003 containerd[2010]: time="2025-01-29T12:06:09.340171820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:06:09.342003 containerd[2010]: time="2025-01-29T12:06:09.340191914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:09.342003 containerd[2010]: time="2025-01-29T12:06:09.340341535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.003 [INFO][5024] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.007 [INFO][5024] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" iface="eth0" netns="/var/run/netns/cni-ce016b0b-c349-f4a4-0dd8-d90b380abb17" Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.009 [INFO][5024] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" iface="eth0" netns="/var/run/netns/cni-ce016b0b-c349-f4a4-0dd8-d90b380abb17" Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.018 [INFO][5024] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" iface="eth0" netns="/var/run/netns/cni-ce016b0b-c349-f4a4-0dd8-d90b380abb17" Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.019 [INFO][5024] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.019 [INFO][5024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.336 [INFO][5054] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" HandleID="k8s-pod-network.4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.338 [INFO][5054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.338 [INFO][5054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.353 [WARNING][5054] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" HandleID="k8s-pod-network.4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.353 [INFO][5054] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" HandleID="k8s-pod-network.4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.355 [INFO][5054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:06:09.398897 containerd[2010]: 2025-01-29 12:06:09.367 [INFO][5024] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:06:09.401959 containerd[2010]: time="2025-01-29T12:06:09.400826870Z" level=info msg="TearDown network for sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\" successfully" Jan 29 12:06:09.401959 containerd[2010]: time="2025-01-29T12:06:09.400865709Z" level=info msg="StopPodSandbox for \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\" returns successfully" Jan 29 12:06:09.416538 systemd[1]: run-netns-cni\x2dce016b0b\x2dc349\x2df4a4\x2d0dd8\x2dd90b380abb17.mount: Deactivated successfully. Jan 29 12:06:09.424715 containerd[2010]: time="2025-01-29T12:06:09.424668774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcbbd4-bfmd4,Uid:3a4175b0-1c11-4e3d-bc96-94db0994a2b9,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:06:09.508632 systemd[1]: run-containerd-runc-k8s.io-b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca-runc.89OrOx.mount: Deactivated successfully. Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.211 [INFO][5044] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.227 [INFO][5044] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" iface="eth0" netns="/var/run/netns/cni-13417211-9052-2a7d-3804-a6ea6123d3e0" Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.228 [INFO][5044] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" iface="eth0" netns="/var/run/netns/cni-13417211-9052-2a7d-3804-a6ea6123d3e0" Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.235 [INFO][5044] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" iface="eth0" netns="/var/run/netns/cni-13417211-9052-2a7d-3804-a6ea6123d3e0" Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.236 [INFO][5044] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.236 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.549 [INFO][5073] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" HandleID="k8s-pod-network.66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.563 [INFO][5073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.563 [INFO][5073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.589 [WARNING][5073] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" HandleID="k8s-pod-network.66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.589 [INFO][5073] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" HandleID="k8s-pod-network.66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.593 [INFO][5073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:06:09.674169 containerd[2010]: 2025-01-29 12:06:09.657 [INFO][5044] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:06:09.677290 containerd[2010]: time="2025-01-29T12:06:09.675717610Z" level=info msg="TearDown network for sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\" successfully" Jan 29 12:06:09.677290 containerd[2010]: time="2025-01-29T12:06:09.675768524Z" level=info msg="StopPodSandbox for \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\" returns successfully" Jan 29 12:06:09.693038 containerd[2010]: time="2025-01-29T12:06:09.690464256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcbbd4-mbmww,Uid:4a677651-8370-4ade-886d-e86025868e97,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:06:09.778928 containerd[2010]: time="2025-01-29T12:06:09.778868978Z" level=info msg="StopPodSandbox for \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\"" Jan 29 12:06:09.798721 containerd[2010]: time="2025-01-29T12:06:09.798329932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8nm9g,Uid:393c076b-4fd5-42ce-ac5b-7c010e93a9f4,Namespace:kube-system,Attempt:1,} returns sandbox id \"b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca\"" Jan 29 12:06:09.834455 containerd[2010]: time="2025-01-29T12:06:09.833777112Z" level=info msg="CreateContainer within sandbox \"b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:06:09.856751 kernel: bpftool[5196]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:06:09.933922 containerd[2010]: time="2025-01-29T12:06:09.933810317Z" level=info msg="CreateContainer within sandbox \"b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a6b73a7e0ad73650d328ecc472a9c8c3bd4674613761a98b8f9df455ecd1fdf\"" Jan 29 12:06:09.938170 containerd[2010]: time="2025-01-29T12:06:09.937391974Z" level=info msg="StartContainer for \"8a6b73a7e0ad73650d328ecc472a9c8c3bd4674613761a98b8f9df455ecd1fdf\"" Jan 29 12:06:10.234245 containerd[2010]: time="2025-01-29T12:06:10.234129238Z" level=info msg="StartContainer for \"8a6b73a7e0ad73650d328ecc472a9c8c3bd4674613761a98b8f9df455ecd1fdf\" returns successfully" Jan 29 12:06:10.297137 systemd-networkd[1570]: cali2af00a38861: Link UP Jan 29 12:06:10.308432 systemd-networkd[1570]: cali2af00a38861: Gained carrier Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:09.997 [INFO][5188] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:09.997 [INFO][5188] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" iface="eth0" netns="/var/run/netns/cni-a41c60ab-8456-a171-047b-145515625bc8" Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:09.999 [INFO][5188] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" iface="eth0" netns="/var/run/netns/cni-a41c60ab-8456-a171-047b-145515625bc8" Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.000 [INFO][5188] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" iface="eth0" netns="/var/run/netns/cni-a41c60ab-8456-a171-047b-145515625bc8" Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.000 [INFO][5188] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.000 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.166 [INFO][5215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" HandleID="k8s-pod-network.7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.166 [INFO][5215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.260 [INFO][5215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.276 [WARNING][5215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" HandleID="k8s-pod-network.7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.277 [INFO][5215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" HandleID="k8s-pod-network.7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.281 [INFO][5215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:06:10.337758 containerd[2010]: 2025-01-29 12:06:10.327 [INFO][5188] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:06:10.341072 containerd[2010]: time="2025-01-29T12:06:10.338147412Z" level=info msg="TearDown network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\" successfully" Jan 29 12:06:10.341072 containerd[2010]: time="2025-01-29T12:06:10.338181845Z" level=info msg="StopPodSandbox for \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\" returns successfully" Jan 29 12:06:10.341072 containerd[2010]: time="2025-01-29T12:06:10.339248445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7c9d9464-548vs,Uid:f814fb92-3f75-45fa-afb5-f59e7f19b575,Namespace:calico-system,Attempt:1,}" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:09.926 [INFO][5139] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0 calico-apiserver-5979bcbbd4- calico-apiserver 3a4175b0-1c11-4e3d-bc96-94db0994a2b9 904 0 2025-01-29 12:05:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5979bcbbd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-14 calico-apiserver-5979bcbbd4-bfmd4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2af00a38861 [] []}} ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-bfmd4" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:09.926 [INFO][5139] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-bfmd4" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.133 [INFO][5201] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" HandleID="k8s-pod-network.ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.156 [INFO][5201] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" HandleID="k8s-pod-network.ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c2750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-14", "pod":"calico-apiserver-5979bcbbd4-bfmd4", "timestamp":"2025-01-29 12:06:10.133246677 +0000 UTC"}, Hostname:"ip-172-31-19-14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.156 [INFO][5201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.157 [INFO][5201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.157 [INFO][5201] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-14' Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.160 [INFO][5201] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" host="ip-172-31-19-14" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.181 [INFO][5201] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-14" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.203 [INFO][5201] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.209 [INFO][5201] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.215 [INFO][5201] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.215 [INFO][5201] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" host="ip-172-31-19-14" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.228 [INFO][5201] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.244 [INFO][5201] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" host="ip-172-31-19-14" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.259 [INFO][5201] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.130/26] block=192.168.62.128/26 handle="k8s-pod-network.ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" host="ip-172-31-19-14" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.259 [INFO][5201] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.130/26] handle="k8s-pod-network.ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" host="ip-172-31-19-14" Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.259 [INFO][5201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:06:10.425672 containerd[2010]: 2025-01-29 12:06:10.259 [INFO][5201] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.130/26] IPv6=[] ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" HandleID="k8s-pod-network.ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:10.459474 containerd[2010]: 2025-01-29 12:06:10.268 [INFO][5139] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-bfmd4" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0", GenerateName:"calico-apiserver-5979bcbbd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a4175b0-1c11-4e3d-bc96-94db0994a2b9", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcbbd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"", Pod:"calico-apiserver-5979bcbbd4-bfmd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af00a38861", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:10.459474 containerd[2010]: 2025-01-29 12:06:10.271 [INFO][5139] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.130/32] ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-bfmd4" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:10.459474 containerd[2010]: 2025-01-29 12:06:10.271 [INFO][5139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2af00a38861 ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-bfmd4" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:10.459474 containerd[2010]: 2025-01-29 12:06:10.326 [INFO][5139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-bfmd4" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:10.459474 containerd[2010]: 2025-01-29 12:06:10.331 [INFO][5139] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-bfmd4" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0", GenerateName:"calico-apiserver-5979bcbbd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a4175b0-1c11-4e3d-bc96-94db0994a2b9", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcbbd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b", Pod:"calico-apiserver-5979bcbbd4-bfmd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af00a38861", MAC:"ee:25:a4:a0:ab:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:10.459474 containerd[2010]: 2025-01-29 12:06:10.368 [INFO][5139] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-bfmd4" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:06:10.439634 systemd[1]: run-netns-cni\x2d13417211\x2d9052\x2d2a7d\x2d3804\x2da6ea6123d3e0.mount: Deactivated successfully. Jan 29 12:06:10.440445 systemd[1]: run-netns-cni\x2da41c60ab\x2d8456\x2da171\x2d047b\x2d145515625bc8.mount: Deactivated successfully. Jan 29 12:06:10.475869 systemd-networkd[1570]: calic1e99c52a1c: Gained IPv6LL Jan 29 12:06:10.614069 (udev-worker)[4783]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:06:10.617241 systemd-networkd[1570]: vxlan.calico: Link UP Jan 29 12:06:10.617247 systemd-networkd[1570]: vxlan.calico: Gained carrier Jan 29 12:06:10.751978 (udev-worker)[5303]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:06:10.787970 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:10.790035 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:10.790069 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:10.847976 systemd-networkd[1570]: cali96c052cd7d3: Link UP Jan 29 12:06:10.852394 systemd-networkd[1570]: cali96c052cd7d3: Gained carrier Jan 29 12:06:10.887893 containerd[2010]: time="2025-01-29T12:06:10.887556540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:06:10.888089 containerd[2010]: time="2025-01-29T12:06:10.887945817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:06:10.888985 containerd[2010]: time="2025-01-29T12:06:10.888041467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:10.889426 containerd[2010]: time="2025-01-29T12:06:10.888276493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:10.957824 kubelet[3607]: I0129 12:06:10.952563 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8nm9g" podStartSLOduration=54.952535655 podStartE2EDuration="54.952535655s" podCreationTimestamp="2025-01-29 12:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:06:10.647640903 +0000 UTC m=+67.080278794" watchObservedRunningTime="2025-01-29 12:06:10.952535655 +0000 UTC m=+67.385173550" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:09.981 [INFO][5165] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0 calico-apiserver-5979bcbbd4- calico-apiserver 4a677651-8370-4ade-886d-e86025868e97 906 0 2025-01-29 12:05:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5979bcbbd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-14 calico-apiserver-5979bcbbd4-mbmww eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali96c052cd7d3 [] []}} ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-mbmww" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:09.982 [INFO][5165] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-mbmww" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.205 [INFO][5218] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" HandleID="k8s-pod-network.8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.238 [INFO][5218] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" HandleID="k8s-pod-network.8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032e3c0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ip-172-31-19-14", "pod":"calico-apiserver-5979bcbbd4-mbmww", "timestamp":"2025-01-29 12:06:10.205105205 +0000 UTC"}, Hostname:"ip-172-31-19-14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.239 [INFO][5218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.282 [INFO][5218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.283 [INFO][5218] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-14' Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.299 [INFO][5218] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" host="ip-172-31-19-14" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.313 [INFO][5218] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-14" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.346 [INFO][5218] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.351 [INFO][5218] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.418 [INFO][5218] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.422 [INFO][5218] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" host="ip-172-31-19-14" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.434 [INFO][5218] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910 Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.642 [INFO][5218] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" host="ip-172-31-19-14" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.794 [INFO][5218] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.131/26] block=192.168.62.128/26 handle="k8s-pod-network.8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" host="ip-172-31-19-14" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.794 [INFO][5218] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.131/26] handle="k8s-pod-network.8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" host="ip-172-31-19-14" Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.794 [INFO][5218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:06:11.001832 containerd[2010]: 2025-01-29 12:06:10.794 [INFO][5218] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.131/26] IPv6=[] ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" HandleID="k8s-pod-network.8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:11.002926 containerd[2010]: 2025-01-29 12:06:10.833 [INFO][5165] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-mbmww" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0", GenerateName:"calico-apiserver-5979bcbbd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a677651-8370-4ade-886d-e86025868e97", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcbbd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"", Pod:"calico-apiserver-5979bcbbd4-mbmww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96c052cd7d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:11.002926 containerd[2010]: 2025-01-29 12:06:10.833 [INFO][5165] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.131/32] ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-mbmww" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:11.002926 containerd[2010]: 2025-01-29 12:06:10.834 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96c052cd7d3 ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-mbmww" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:11.002926 containerd[2010]: 2025-01-29 12:06:10.859 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-mbmww" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:11.002926 containerd[2010]: 2025-01-29 12:06:10.862 [INFO][5165] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-mbmww" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0", GenerateName:"calico-apiserver-5979bcbbd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a677651-8370-4ade-886d-e86025868e97", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcbbd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910", Pod:"calico-apiserver-5979bcbbd4-mbmww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96c052cd7d3", MAC:"9a:43:c5:af:f5:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:11.002926 containerd[2010]: 2025-01-29 12:06:10.953 [INFO][5165] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcbbd4-mbmww" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:06:11.111890 containerd[2010]: time="2025-01-29T12:06:11.111199485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:06:11.111890 containerd[2010]: time="2025-01-29T12:06:11.111287622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:06:11.111890 containerd[2010]: time="2025-01-29T12:06:11.111307667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:11.111890 containerd[2010]: time="2025-01-29T12:06:11.111426277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:11.249770 containerd[2010]: time="2025-01-29T12:06:11.249607758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcbbd4-bfmd4,Uid:3a4175b0-1c11-4e3d-bc96-94db0994a2b9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b\"" Jan 29 12:06:11.259827 containerd[2010]: time="2025-01-29T12:06:11.258253089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:06:11.377198 systemd-networkd[1570]: calicf371032d1a: Link UP Jan 29 12:06:11.380900 systemd-networkd[1570]: calicf371032d1a: Gained carrier Jan 29 12:06:11.432586 containerd[2010]: time="2025-01-29T12:06:11.432032880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcbbd4-mbmww,Uid:4a677651-8370-4ade-886d-e86025868e97,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910\"" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.042 [INFO][5298] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0 calico-kube-controllers-f7c9d9464- calico-system f814fb92-3f75-45fa-afb5-f59e7f19b575 912 0 2025-01-29 12:05:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f7c9d9464 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-14 calico-kube-controllers-f7c9d9464-548vs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicf371032d1a [] []}} ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Namespace="calico-system" Pod="calico-kube-controllers-f7c9d9464-548vs" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.042 [INFO][5298] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Namespace="calico-system" Pod="calico-kube-controllers-f7c9d9464-548vs" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.213 [INFO][5368] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" HandleID="k8s-pod-network.7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.246 [INFO][5368] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" HandleID="k8s-pod-network.7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec110), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-14", "pod":"calico-kube-controllers-f7c9d9464-548vs", "timestamp":"2025-01-29 12:06:11.213123437 +0000 UTC"}, Hostname:"ip-172-31-19-14", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.247 [INFO][5368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.247 [INFO][5368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.247 [INFO][5368] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-14' Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.250 [INFO][5368] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" host="ip-172-31-19-14" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.270 [INFO][5368] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-14" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.282 [INFO][5368] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.286 [INFO][5368] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.297 [INFO][5368] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.297 [INFO][5368] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" host="ip-172-31-19-14" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.301 [INFO][5368] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.321 [INFO][5368] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" host="ip-172-31-19-14" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.341 [INFO][5368] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.132/26] block=192.168.62.128/26 handle="k8s-pod-network.7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" host="ip-172-31-19-14" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.341 [INFO][5368] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.132/26] handle="k8s-pod-network.7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" host="ip-172-31-19-14" Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.342 [INFO][5368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:06:11.456983 containerd[2010]: 2025-01-29 12:06:11.342 [INFO][5368] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.132/26] IPv6=[] ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" HandleID="k8s-pod-network.7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:11.465687 containerd[2010]: 2025-01-29 12:06:11.365 [INFO][5298] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Namespace="calico-system" Pod="calico-kube-controllers-f7c9d9464-548vs" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0", GenerateName:"calico-kube-controllers-f7c9d9464-", Namespace:"calico-system", SelfLink:"", UID:"f814fb92-3f75-45fa-afb5-f59e7f19b575", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7c9d9464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"", Pod:"calico-kube-controllers-f7c9d9464-548vs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf371032d1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:11.465687 containerd[2010]: 2025-01-29 12:06:11.365 [INFO][5298] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.132/32] ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Namespace="calico-system" Pod="calico-kube-controllers-f7c9d9464-548vs" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:11.465687 containerd[2010]: 2025-01-29 12:06:11.365 [INFO][5298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf371032d1a ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Namespace="calico-system" Pod="calico-kube-controllers-f7c9d9464-548vs" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:11.465687 containerd[2010]: 2025-01-29 12:06:11.386 [INFO][5298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Namespace="calico-system" Pod="calico-kube-controllers-f7c9d9464-548vs" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:11.465687 containerd[2010]: 2025-01-29 12:06:11.394 [INFO][5298] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Namespace="calico-system" Pod="calico-kube-controllers-f7c9d9464-548vs" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0", GenerateName:"calico-kube-controllers-f7c9d9464-", Namespace:"calico-system", SelfLink:"", UID:"f814fb92-3f75-45fa-afb5-f59e7f19b575", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7c9d9464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce", Pod:"calico-kube-controllers-f7c9d9464-548vs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf371032d1a", MAC:"52:9d:25:57:a1:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:11.465687 containerd[2010]: 2025-01-29 12:06:11.442 [INFO][5298] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce" Namespace="calico-system" Pod="calico-kube-controllers-f7c9d9464-548vs" WorkloadEndpoint="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:06:11.551956 containerd[2010]: time="2025-01-29T12:06:11.551413001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:06:11.551956 containerd[2010]: time="2025-01-29T12:06:11.551493642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:06:11.551956 containerd[2010]: time="2025-01-29T12:06:11.551511623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:11.551956 containerd[2010]: time="2025-01-29T12:06:11.551625560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:11.733151 containerd[2010]: time="2025-01-29T12:06:11.732773866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7c9d9464-548vs,Uid:f814fb92-3f75-45fa-afb5-f59e7f19b575,Namespace:calico-system,Attempt:1,} returns sandbox id \"7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce\"" Jan 29 12:06:11.825790 systemd-networkd[1570]: cali2af00a38861: Gained IPv6LL Jan 29 12:06:12.513010 systemd-networkd[1570]: vxlan.calico: Gained IPv6LL Jan 29 12:06:12.578938 systemd-networkd[1570]: calicf371032d1a: Gained IPv6LL Jan 29 12:06:12.898891 systemd-networkd[1570]: cali96c052cd7d3: Gained IPv6LL Jan 29 12:06:13.086760 systemd[1]: Started sshd@13-172.31.19.14:22-139.178.68.195:52450.service - OpenSSH per-connection server daemon (139.178.68.195:52450). Jan 29 12:06:13.324815 sshd[5510]: Accepted publickey for core from 139.178.68.195 port 52450 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:13.328292 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:13.344079 systemd-logind[1987]: New session 14 of user core. Jan 29 12:06:13.352073 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:06:13.918414 sshd[5510]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:13.927059 systemd-logind[1987]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:06:13.927580 systemd[1]: sshd@13-172.31.19.14:22-139.178.68.195:52450.service: Deactivated successfully. Jan 29 12:06:13.937652 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:06:13.941385 systemd-logind[1987]: Removed session 14. Jan 29 12:06:15.051560 containerd[2010]: time="2025-01-29T12:06:15.051508867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:15.053736 containerd[2010]: time="2025-01-29T12:06:15.053676025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 12:06:15.058736 containerd[2010]: time="2025-01-29T12:06:15.058304012Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:15.066427 containerd[2010]: time="2025-01-29T12:06:15.066370016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:15.067688 containerd[2010]: time="2025-01-29T12:06:15.067645711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.807503229s" Jan 29 12:06:15.067880 containerd[2010]: time="2025-01-29T12:06:15.067859049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:06:15.070989 containerd[2010]: time="2025-01-29T12:06:15.070949522Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:06:15.085893 containerd[2010]: time="2025-01-29T12:06:15.085853715Z" level=info msg="CreateContainer within sandbox \"ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:06:15.106050 ntpd[1964]: Listen normally on 6 vxlan.calico 192.168.62.128:123 Jan 29 12:06:15.112421 containerd[2010]: time="2025-01-29T12:06:15.110190735Z" level=info msg="CreateContainer within sandbox \"ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"543a1093e5b1d28ae561431cd42a2a728b3334bd02674a3c65c0c73c0954daee\"" Jan 29 12:06:15.112649 ntpd[1964]: 29 Jan 12:06:15 ntpd[1964]: Listen normally on 6 vxlan.calico 192.168.62.128:123 Jan 29 12:06:15.112649 ntpd[1964]: 29 Jan 12:06:15 ntpd[1964]: Listen normally on 7 calic1e99c52a1c [fe80::ecee:eeff:feee:eeee%4]:123 Jan 29 12:06:15.112649 ntpd[1964]: 29 Jan 12:06:15 ntpd[1964]: Listen normally on 8 cali2af00a38861 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 29 12:06:15.112649 ntpd[1964]: 29 Jan 12:06:15 ntpd[1964]: Listen normally on 9 vxlan.calico [fe80::6452:83ff:fe24:1c0a%6]:123 Jan 29 12:06:15.112649 ntpd[1964]: 29 Jan 12:06:15 ntpd[1964]: Listen normally on 10 cali96c052cd7d3 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 12:06:15.112649 ntpd[1964]: 29 Jan 12:06:15 ntpd[1964]: Listen normally on 11 calicf371032d1a [fe80::ecee:eeff:feee:eeee%10]:123 Jan 29 12:06:15.106137 ntpd[1964]: Listen normally on 7 calic1e99c52a1c [fe80::ecee:eeff:feee:eeee%4]:123 Jan 29 12:06:15.106197 ntpd[1964]: Listen normally on 8 cali2af00a38861 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 29 12:06:15.106241 ntpd[1964]: Listen normally on 9 vxlan.calico [fe80::6452:83ff:fe24:1c0a%6]:123 Jan 29 12:06:15.106282 ntpd[1964]: Listen normally on 10 cali96c052cd7d3 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 12:06:15.106318 ntpd[1964]: Listen normally on 11 calicf371032d1a [fe80::ecee:eeff:feee:eeee%10]:123 Jan 29 12:06:15.124917 containerd[2010]: time="2025-01-29T12:06:15.124341949Z" level=info msg="StartContainer for \"543a1093e5b1d28ae561431cd42a2a728b3334bd02674a3c65c0c73c0954daee\"" Jan 29 12:06:15.339704 containerd[2010]: time="2025-01-29T12:06:15.339262172Z" level=info msg="StartContainer for \"543a1093e5b1d28ae561431cd42a2a728b3334bd02674a3c65c0c73c0954daee\" returns successfully" Jan 29 12:06:15.462261 containerd[2010]: time="2025-01-29T12:06:15.460762810Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:15.468027 containerd[2010]: time="2025-01-29T12:06:15.467967365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:06:15.496920 containerd[2010]: time="2025-01-29T12:06:15.495932240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 424.940458ms" Jan 29 12:06:15.496920 containerd[2010]: time="2025-01-29T12:06:15.496925424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:06:15.498371 containerd[2010]: time="2025-01-29T12:06:15.498341162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:06:15.513480 containerd[2010]: time="2025-01-29T12:06:15.513303819Z" level=info msg="CreateContainer within sandbox \"8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:06:15.546925 containerd[2010]: time="2025-01-29T12:06:15.546719826Z" level=info msg="CreateContainer within sandbox \"8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c4585f32f0aed1c460fc70e3ea258292d5a01c38660533fd27fe4803f3c06c1d\"" Jan 29 12:06:15.562082 containerd[2010]: time="2025-01-29T12:06:15.558791100Z" level=info msg="StartContainer for \"c4585f32f0aed1c460fc70e3ea258292d5a01c38660533fd27fe4803f3c06c1d\"" Jan 29 12:06:15.825159 containerd[2010]: time="2025-01-29T12:06:15.825018345Z" level=info msg="StartContainer for \"c4585f32f0aed1c460fc70e3ea258292d5a01c38660533fd27fe4803f3c06c1d\" returns successfully" Jan 29 12:06:16.481815 kubelet[3607]: I0129 12:06:16.481768 3607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:06:16.508899 kubelet[3607]: I0129 12:06:16.506547 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5979bcbbd4-bfmd4" podStartSLOduration=47.693772449 podStartE2EDuration="51.506521499s" podCreationTimestamp="2025-01-29 12:05:25 +0000 UTC" firstStartedPulling="2025-01-29 12:06:11.257335861 +0000 UTC m=+67.689973743" lastFinishedPulling="2025-01-29 12:06:15.070084906 +0000 UTC m=+71.502722793" observedRunningTime="2025-01-29 12:06:15.51471491 +0000 UTC m=+71.947352799" watchObservedRunningTime="2025-01-29 12:06:16.506521499 +0000 UTC m=+72.939159393" Jan 29 12:06:16.802180 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:16.803828 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:16.802216 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:17.808631 kubelet[3607]: I0129 12:06:17.808562 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5979bcbbd4-mbmww" podStartSLOduration=48.755655334 podStartE2EDuration="52.808539383s" podCreationTimestamp="2025-01-29 12:05:25 +0000 UTC" firstStartedPulling="2025-01-29 12:06:11.445319684 +0000 UTC m=+67.877957571" lastFinishedPulling="2025-01-29 12:06:15.498203735 +0000 UTC m=+71.930841620" observedRunningTime="2025-01-29 12:06:16.513000706 +0000 UTC m=+72.945638604" watchObservedRunningTime="2025-01-29 12:06:17.808539383 +0000 UTC m=+74.241177282" Jan 29 12:06:18.770777 containerd[2010]: time="2025-01-29T12:06:18.770614703Z" level=info msg="StopPodSandbox for \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\"" Jan 29 12:06:18.952143 systemd[1]: Started sshd@14-172.31.19.14:22-139.178.68.195:46730.service - OpenSSH per-connection server daemon (139.178.68.195:46730). 
Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:18.932 [INFO][5656] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:18.934 [INFO][5656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" iface="eth0" netns="/var/run/netns/cni-2fe9dca6-0967-9941-706a-5e78f36ca6e0" Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:18.936 [INFO][5656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" iface="eth0" netns="/var/run/netns/cni-2fe9dca6-0967-9941-706a-5e78f36ca6e0" Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:18.952 [INFO][5656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" iface="eth0" netns="/var/run/netns/cni-2fe9dca6-0967-9941-706a-5e78f36ca6e0" Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:18.955 [INFO][5656] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:18.955 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:19.125 [INFO][5663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" HandleID="k8s-pod-network.06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:19.125 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:19.125 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:19.141 [WARNING][5663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" HandleID="k8s-pod-network.06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:19.141 [INFO][5663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" HandleID="k8s-pod-network.06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:19.161 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:06:19.183122 containerd[2010]: 2025-01-29 12:06:19.166 [INFO][5656] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:06:19.184737 containerd[2010]: time="2025-01-29T12:06:19.183809297Z" level=info msg="TearDown network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\" successfully" Jan 29 12:06:19.184737 containerd[2010]: time="2025-01-29T12:06:19.183845333Z" level=info msg="StopPodSandbox for \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\" returns successfully" Jan 29 12:06:19.187809 containerd[2010]: time="2025-01-29T12:06:19.187730940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht95p,Uid:6e3ae492-9704-4aa3-aacf-00b3ecf4f562,Namespace:calico-system,Attempt:1,}" Jan 29 12:06:19.199525 systemd[1]: run-netns-cni\x2d2fe9dca6\x2d0967\x2d9941\x2d706a\x2d5e78f36ca6e0.mount: Deactivated successfully. Jan 29 12:06:19.242287 sshd[5662]: Accepted publickey for core from 139.178.68.195 port 46730 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:19.240309 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:19.273574 systemd-logind[1987]: New session 15 of user core. Jan 29 12:06:19.280561 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:06:19.728932 containerd[2010]: time="2025-01-29T12:06:19.727372569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:19.732365 containerd[2010]: time="2025-01-29T12:06:19.732321176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 12:06:19.735866 containerd[2010]: time="2025-01-29T12:06:19.735826617Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:19.744140 containerd[2010]: time="2025-01-29T12:06:19.744089926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:19.745707 containerd[2010]: time="2025-01-29T12:06:19.744861842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.246319887s" Jan 29 12:06:19.745707 containerd[2010]: time="2025-01-29T12:06:19.744909279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 12:06:19.808336 containerd[2010]: time="2025-01-29T12:06:19.802962119Z" level=info msg="StopPodSandbox for \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\"" Jan 29 12:06:19.828515 containerd[2010]: time="2025-01-29T12:06:19.828317059Z" level=info msg="CreateContainer within sandbox \"7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:06:19.952245 systemd-networkd[1570]: cali24123374334: Link UP Jan 29 
12:06:19.952582 systemd-networkd[1570]: cali24123374334: Gained carrier Jan 29 12:06:19.993288 (udev-worker)[5721]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.698 [INFO][5677] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0 csi-node-driver- calico-system 6e3ae492-9704-4aa3-aacf-00b3ecf4f562 986 0 2025-01-29 12:05:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-14 csi-node-driver-ht95p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali24123374334 [] []}} ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Namespace="calico-system" Pod="csi-node-driver-ht95p" WorkloadEndpoint="ip--172--31--19--14-k8s-csi--node--driver--ht95p-" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.698 [INFO][5677] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Namespace="calico-system" Pod="csi-node-driver-ht95p" WorkloadEndpoint="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.762 [INFO][5692] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" HandleID="k8s-pod-network.b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.781 [INFO][5692] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" HandleID="k8s-pod-network.b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311370), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-14", "pod":"csi-node-driver-ht95p", "timestamp":"2025-01-29 12:06:19.762566117 +0000 UTC"}, Hostname:"ip-172-31-19-14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.781 [INFO][5692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.781 [INFO][5692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.781 [INFO][5692] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-14' Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.786 [INFO][5692] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" host="ip-172-31-19-14" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.810 [INFO][5692] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-14" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.825 [INFO][5692] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.833 [INFO][5692] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.845 [INFO][5692] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.845 [INFO][5692] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" host="ip-172-31-19-14" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.849 [INFO][5692] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.864 [INFO][5692] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" host="ip-172-31-19-14" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.888 [INFO][5692] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.133/26] block=192.168.62.128/26 handle="k8s-pod-network.b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" host="ip-172-31-19-14" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.894 [INFO][5692] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.133/26] handle="k8s-pod-network.b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" host="ip-172-31-19-14" Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.895 [INFO][5692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:06:20.036490 containerd[2010]: 2025-01-29 12:06:19.896 [INFO][5692] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.133/26] IPv6=[] ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" HandleID="k8s-pod-network.b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:20.054116 containerd[2010]: 2025-01-29 12:06:19.918 [INFO][5677] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Namespace="calico-system" Pod="csi-node-driver-ht95p" WorkloadEndpoint="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e3ae492-9704-4aa3-aacf-00b3ecf4f562", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"", Pod:"csi-node-driver-ht95p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24123374334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:20.054116 containerd[2010]: 2025-01-29 12:06:19.942 [INFO][5677] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.133/32] ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Namespace="calico-system" Pod="csi-node-driver-ht95p" WorkloadEndpoint="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:20.054116 containerd[2010]: 2025-01-29 12:06:19.942 [INFO][5677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24123374334 ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Namespace="calico-system" Pod="csi-node-driver-ht95p" WorkloadEndpoint="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:20.054116 containerd[2010]: 2025-01-29 12:06:19.947 [INFO][5677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Namespace="calico-system" Pod="csi-node-driver-ht95p" WorkloadEndpoint="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:20.054116 containerd[2010]: 2025-01-29 12:06:19.948 [INFO][5677] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Namespace="calico-system" 
Pod="csi-node-driver-ht95p" WorkloadEndpoint="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e3ae492-9704-4aa3-aacf-00b3ecf4f562", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f", Pod:"csi-node-driver-ht95p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24123374334", MAC:"ee:34:20:7f:be:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:20.054116 containerd[2010]: 2025-01-29 12:06:19.978 [INFO][5677] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f" Namespace="calico-system" Pod="csi-node-driver-ht95p" WorkloadEndpoint="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:06:20.054116 containerd[2010]: time="2025-01-29T12:06:20.044938186Z" level=info msg="CreateContainer within sandbox \"7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"98819270084f9e5cbeb3759b230d12245f549959864ff3ec950ead1002082478\"" Jan 29 12:06:20.071987 containerd[2010]: time="2025-01-29T12:06:20.065443209Z" level=info msg="StartContainer for \"98819270084f9e5cbeb3759b230d12245f549959864ff3ec950ead1002082478\"" Jan 29 12:06:20.241275 containerd[2010]: time="2025-01-29T12:06:20.240623064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:06:20.241275 containerd[2010]: time="2025-01-29T12:06:20.240778799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:06:20.241275 containerd[2010]: time="2025-01-29T12:06:20.240823724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:20.241275 containerd[2010]: time="2025-01-29T12:06:20.240939318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.253 [INFO][5715] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.254 [INFO][5715] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" iface="eth0" netns="/var/run/netns/cni-06c6cfdc-8217-1969-4107-e38b796293ca" Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.256 [INFO][5715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" iface="eth0" netns="/var/run/netns/cni-06c6cfdc-8217-1969-4107-e38b796293ca" Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.259 [INFO][5715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" iface="eth0" netns="/var/run/netns/cni-06c6cfdc-8217-1969-4107-e38b796293ca" Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.259 [INFO][5715] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.261 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.389 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" HandleID="k8s-pod-network.c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.390 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.390 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.417 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" HandleID="k8s-pod-network.c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.417 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" HandleID="k8s-pod-network.c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.423 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:06:20.451356 containerd[2010]: 2025-01-29 12:06:20.442 [INFO][5715] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:06:20.452138 containerd[2010]: time="2025-01-29T12:06:20.451877424Z" level=info msg="TearDown network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\" successfully" Jan 29 12:06:20.452138 containerd[2010]: time="2025-01-29T12:06:20.451941394Z" level=info msg="StopPodSandbox for \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\" returns successfully" Jan 29 12:06:20.480553 containerd[2010]: time="2025-01-29T12:06:20.480374458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6glqj,Uid:bbdf3301-3539-4945-83b3-f31451672e0c,Namespace:kube-system,Attempt:1,}" Jan 29 12:06:20.482067 containerd[2010]: time="2025-01-29T12:06:20.481932636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht95p,Uid:6e3ae492-9704-4aa3-aacf-00b3ecf4f562,Namespace:calico-system,Attempt:1,} returns sandbox id \"b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f\"" Jan 29 12:06:20.582251 systemd[1]: run-netns-cni\x2d06c6cfdc\x2d8217\x2d1969\x2d4107\x2de38b796293ca.mount: Deactivated successfully. Jan 29 12:06:20.600523 containerd[2010]: time="2025-01-29T12:06:20.600367435Z" level=info msg="StartContainer for \"98819270084f9e5cbeb3759b230d12245f549959864ff3ec950ead1002082478\" returns successfully" Jan 29 12:06:20.643509 containerd[2010]: time="2025-01-29T12:06:20.643467940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:06:20.662840 sshd[5662]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:20.691933 systemd[1]: Started sshd@15-172.31.19.14:22-139.178.68.195:46742.service - OpenSSH per-connection server daemon (139.178.68.195:46742). Jan 29 12:06:20.694512 systemd[1]: sshd@14-172.31.19.14:22-139.178.68.195:46730.service: Deactivated successfully. Jan 29 12:06:20.705092 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:06:20.712615 systemd-logind[1987]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:06:20.734849 systemd-logind[1987]: Removed session 15. Jan 29 12:06:20.801572 systemd[1]: run-containerd-runc-k8s.io-98819270084f9e5cbeb3759b230d12245f549959864ff3ec950ead1002082478-runc.dUrGCy.mount: Deactivated successfully. Jan 29 12:06:20.898301 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:20.902274 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:20.898333 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:20.911830 sshd[5834]: Accepted publickey for core from 139.178.68.195 port 46742 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:20.915707 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:20.949544 systemd-logind[1987]: New session 16 of user core. Jan 29 12:06:20.953161 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 29 12:06:21.007507 kubelet[3607]: I0129 12:06:21.006520 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f7c9d9464-548vs" podStartSLOduration=46.992432295 podStartE2EDuration="55.006489394s" podCreationTimestamp="2025-01-29 12:05:26 +0000 UTC" firstStartedPulling="2025-01-29 12:06:11.735922461 +0000 UTC m=+68.168560341" lastFinishedPulling="2025-01-29 12:06:19.749979555 +0000 UTC m=+76.182617440" observedRunningTime="2025-01-29 12:06:20.674321762 +0000 UTC m=+77.106959655" watchObservedRunningTime="2025-01-29 12:06:21.006489394 +0000 UTC m=+77.439127288" Jan 29 12:06:21.089034 systemd-networkd[1570]: cali24123374334: Gained IPv6LL Jan 29 12:06:21.148900 (udev-worker)[5732]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:06:21.150942 systemd-networkd[1570]: calia78caeeaafb: Link UP Jan 29 12:06:21.153383 systemd-networkd[1570]: calia78caeeaafb: Gained carrier Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:20.851 [INFO][5823] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0 coredns-7db6d8ff4d- kube-system bbdf3301-3539-4945-83b3-f31451672e0c 1000 0 2025-01-29 12:05:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-14 coredns-7db6d8ff4d-6glqj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia78caeeaafb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6glqj" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:20.855 [INFO][5823] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6glqj" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:20.995 [INFO][5859] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" HandleID="k8s-pod-network.c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.049 [INFO][5859] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" HandleID="k8s-pod-network.c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000446fc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-14", "pod":"coredns-7db6d8ff4d-6glqj", "timestamp":"2025-01-29 12:06:20.995493287 +0000 UTC"}, Hostname:"ip-172-31-19-14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.050 [INFO][5859] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.052 [INFO][5859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.052 [INFO][5859] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-14' Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.063 [INFO][5859] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" host="ip-172-31-19-14" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.072 [INFO][5859] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-14" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.081 [INFO][5859] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.088 [INFO][5859] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.100 [INFO][5859] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ip-172-31-19-14" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.100 [INFO][5859] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" host="ip-172-31-19-14" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.103 [INFO][5859] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6 Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.118 [INFO][5859] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" host="ip-172-31-19-14" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.133 [INFO][5859] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.134/26] block=192.168.62.128/26 handle="k8s-pod-network.c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" host="ip-172-31-19-14" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.133 [INFO][5859] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.134/26] handle="k8s-pod-network.c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" host="ip-172-31-19-14" Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.133 [INFO][5859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:06:21.189297 containerd[2010]: 2025-01-29 12:06:21.133 [INFO][5859] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.134/26] IPv6=[] ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" HandleID="k8s-pod-network.c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:06:21.192636 containerd[2010]: 2025-01-29 12:06:21.142 [INFO][5823] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6glqj" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bbdf3301-3539-4945-83b3-f31451672e0c", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"", Pod:"coredns-7db6d8ff4d-6glqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia78caeeaafb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:21.192636 containerd[2010]: 2025-01-29 12:06:21.142 [INFO][5823] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.134/32] ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6glqj" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:06:21.192636 containerd[2010]: 2025-01-29 12:06:21.142 [INFO][5823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia78caeeaafb ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6glqj" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:06:21.192636 containerd[2010]: 2025-01-29 12:06:21.152 [INFO][5823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6glqj" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" 
Jan 29 12:06:21.192636 containerd[2010]: 2025-01-29 12:06:21.154 [INFO][5823] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6glqj" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bbdf3301-3539-4945-83b3-f31451672e0c", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6", Pod:"coredns-7db6d8ff4d-6glqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia78caeeaafb", MAC:"4e:b8:a5:ea:b8:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:06:21.192636 containerd[2010]: 2025-01-29 12:06:21.180 [INFO][5823] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6glqj" WorkloadEndpoint="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:06:21.247035 containerd[2010]: time="2025-01-29T12:06:21.245928072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:06:21.248242 containerd[2010]: time="2025-01-29T12:06:21.247597698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:06:21.248438 containerd[2010]: time="2025-01-29T12:06:21.248069187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:21.248850 containerd[2010]: time="2025-01-29T12:06:21.248759527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:21.378903 containerd[2010]: time="2025-01-29T12:06:21.378854534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6glqj,Uid:bbdf3301-3539-4945-83b3-f31451672e0c,Namespace:kube-system,Attempt:1,} returns sandbox id \"c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6\"" Jan 29 12:06:21.401657 containerd[2010]: time="2025-01-29T12:06:21.400288284Z" level=info msg="CreateContainer within sandbox \"c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:06:21.460207 containerd[2010]: time="2025-01-29T12:06:21.460146632Z" level=info msg="CreateContainer within sandbox \"c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51e345731ccf6fd5102a6a0a7646cf0781201ff142148407d6d2e26851cee5e6\"" Jan 29 12:06:21.461189 containerd[2010]: time="2025-01-29T12:06:21.461151465Z" level=info msg="StartContainer for \"51e345731ccf6fd5102a6a0a7646cf0781201ff142148407d6d2e26851cee5e6\"" Jan 29 12:06:21.590821 containerd[2010]: time="2025-01-29T12:06:21.590443682Z" level=info msg="StartContainer for \"51e345731ccf6fd5102a6a0a7646cf0781201ff142148407d6d2e26851cee5e6\" returns successfully" Jan 29 12:06:21.955565 sshd[5834]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:21.968079 systemd-logind[1987]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:06:21.969281 systemd[1]: sshd@15-172.31.19.14:22-139.178.68.195:46742.service: Deactivated successfully. Jan 29 12:06:21.999496 systemd[1]: Started sshd@16-172.31.19.14:22-139.178.68.195:46758.service - OpenSSH per-connection server daemon (139.178.68.195:46758). Jan 29 12:06:22.000345 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:06:22.004254 systemd-logind[1987]: Removed session 16. Jan 29 12:06:22.269063 sshd[5969]: Accepted publickey for core from 139.178.68.195 port 46758 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:22.271795 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:22.280931 systemd-logind[1987]: New session 17 of user core. Jan 29 12:06:22.286525 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 12:06:22.369276 systemd-networkd[1570]: calia78caeeaafb: Gained IPv6LL Jan 29 12:06:22.480228 containerd[2010]: time="2025-01-29T12:06:22.480168224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:22.483073 containerd[2010]: time="2025-01-29T12:06:22.482979869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 12:06:22.486763 containerd[2010]: time="2025-01-29T12:06:22.486726000Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:22.488962 containerd[2010]: time="2025-01-29T12:06:22.488901165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:22.492588 containerd[2010]: time="2025-01-29T12:06:22.492337888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.848645111s" Jan 29 12:06:22.492588 containerd[2010]: time="2025-01-29T12:06:22.492382622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 12:06:22.498278 containerd[2010]: time="2025-01-29T12:06:22.498104609Z" level=info msg="CreateContainer within sandbox \"b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:06:22.565271 containerd[2010]: time="2025-01-29T12:06:22.564925293Z" level=info msg="CreateContainer within sandbox \"b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a3a31a782a959ee9410ab0ebe93a0b720ac9aa09f7f9eb0cb293e4ac99c6da9\"" Jan 29 12:06:22.566605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461322300.mount: Deactivated successfully. 
Jan 29 12:06:22.596487 containerd[2010]: time="2025-01-29T12:06:22.596430032Z" level=info msg="StartContainer for \"8a3a31a782a959ee9410ab0ebe93a0b720ac9aa09f7f9eb0cb293e4ac99c6da9\"" Jan 29 12:06:22.678286 kubelet[3607]: I0129 12:06:22.677684 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6glqj" podStartSLOduration=66.677544491 podStartE2EDuration="1m6.677544491s" podCreationTimestamp="2025-01-29 12:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:06:21.735732022 +0000 UTC m=+78.168369918" watchObservedRunningTime="2025-01-29 12:06:22.677544491 +0000 UTC m=+79.110182384" Jan 29 12:06:22.809234 kubelet[3607]: I0129 12:06:22.809188 3607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:06:22.967998 containerd[2010]: time="2025-01-29T12:06:22.967950522Z" level=info msg="StartContainer for \"8a3a31a782a959ee9410ab0ebe93a0b720ac9aa09f7f9eb0cb293e4ac99c6da9\" returns successfully" Jan 29 12:06:22.975503 containerd[2010]: time="2025-01-29T12:06:22.975123682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:06:24.809991 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:24.802635 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:24.804233 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:25.106077 ntpd[1964]: Listen normally on 12 cali24123374334 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 29 12:06:25.106158 ntpd[1964]: Listen normally on 13 calia78caeeaafb [fe80::ecee:eeff:feee:eeee%12]:123 Jan 29 12:06:25.575127 containerd[2010]: time="2025-01-29T12:06:25.574950939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:25.578221 containerd[2010]: time="2025-01-29T12:06:25.578150043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 12:06:25.586909 containerd[2010]: time="2025-01-29T12:06:25.586860438Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:25.799208 containerd[2010]: time="2025-01-29T12:06:25.799155133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.823865701s" Jan 29 12:06:25.799208 containerd[2010]: time="2025-01-29T12:06:25.799208251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:06:25.806002 containerd[2010]: time="2025-01-29T12:06:25.805946117Z" level=info msg="ImageCreate event
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:06:26.118035 containerd[2010]: time="2025-01-29T12:06:26.117994394Z" level=info msg="CreateContainer within sandbox \"b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:06:26.189265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2925585220.mount: Deactivated successfully. Jan 29 12:06:26.200359 containerd[2010]: time="2025-01-29T12:06:26.200137117Z" level=info msg="CreateContainer within sandbox \"b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0e79aa85bd73040142d6467dad3abfe2254b0c85b9d95b496790ade2c253be18\"" Jan 29 12:06:26.202514 containerd[2010]: time="2025-01-29T12:06:26.202480952Z" level=info msg="StartContainer for \"0e79aa85bd73040142d6467dad3abfe2254b0c85b9d95b496790ade2c253be18\"" Jan 29 12:06:26.854417 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:26.854122 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:26.854155 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:26.923651 containerd[2010]: time="2025-01-29T12:06:26.918934340Z" level=info msg="StartContainer for \"0e79aa85bd73040142d6467dad3abfe2254b0c85b9d95b496790ade2c253be18\" returns successfully" Jan 29 12:06:28.649984 sshd[5969]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:28.715083 systemd[1]: sshd@16-172.31.19.14:22-139.178.68.195:46758.service: Deactivated successfully. Jan 29 12:06:28.754441 systemd[1]: Started sshd@17-172.31.19.14:22-139.178.68.195:38616.service - OpenSSH per-connection server daemon (139.178.68.195:38616). Jan 29 12:06:28.754936 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:06:28.757884 systemd-logind[1987]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:06:28.785885 systemd-logind[1987]: Removed session 17. Jan 29 12:06:28.900052 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:28.905645 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:28.906205 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:29.300431 sshd[6100]: Accepted publickey for core from 139.178.68.195 port 38616 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:29.309953 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:29.331265 systemd-logind[1987]: New session 18 of user core. Jan 29 12:06:29.335880 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 12:06:29.829839 kubelet[3607]: E0129 12:06:29.814647 3607 kubelet.go:2511] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.236s" Jan 29 12:06:29.956371 kubelet[3607]: I0129 12:06:29.945556 3607 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:06:29.956371 kubelet[3607]: I0129 12:06:29.956216 3607 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:06:30.378529 kubelet[3607]: I0129 12:06:30.373219 3607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ht95p" podStartSLOduration=58.913613413 podStartE2EDuration="1m4.355078224s" podCreationTimestamp="2025-01-29 12:05:26 +0000 UTC" firstStartedPulling="2025-01-29 12:06:20.496030425 +0000 UTC m=+76.928668303" lastFinishedPulling="2025-01-29 12:06:25.937495227 +0000 UTC m=+82.370133114" observedRunningTime="2025-01-29 12:06:30.287644462 +0000 UTC m=+86.720282355" watchObservedRunningTime="2025-01-29 12:06:30.355078224 +0000 UTC m=+86.787716118" Jan 29 12:06:30.947123 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:30.948830 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:30.948841 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:31.228911 sshd[6100]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:31.245086 systemd[1]: sshd@17-172.31.19.14:22-139.178.68.195:38616.service: Deactivated successfully. Jan 29 12:06:31.267307 systemd-logind[1987]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:06:31.308751 systemd[1]: Started sshd@18-172.31.19.14:22-139.178.68.195:38620.service - OpenSSH per-connection server daemon (139.178.68.195:38620). Jan 29 12:06:31.315162 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:06:31.373220 systemd-logind[1987]: Removed session 18. Jan 29 12:06:31.683128 sshd[6114]: Accepted publickey for core from 139.178.68.195 port 38620 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:31.688257 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:31.707896 systemd-logind[1987]: New session 19 of user core. Jan 29 12:06:31.713934 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:06:32.082920 sshd[6114]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:32.088082 systemd[1]: sshd@18-172.31.19.14:22-139.178.68.195:38620.service: Deactivated successfully. Jan 29 12:06:32.093471 systemd-logind[1987]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:06:32.093668 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:06:32.098261 systemd-logind[1987]: Removed session 19. Jan 29 12:06:32.995195 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:06:32.994041 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:06:32.994048 systemd-resolved[1893]: Flushed all caches. Jan 29 12:06:37.116546 systemd[1]: Started sshd@19-172.31.19.14:22-139.178.68.195:50896.service - OpenSSH per-connection server daemon (139.178.68.195:50896). 
Jan 29 12:06:37.367019 sshd[6138]: Accepted publickey for core from 139.178.68.195 port 50896 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:37.373901 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:37.388697 systemd-logind[1987]: New session 20 of user core. Jan 29 12:06:37.395094 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:06:37.774380 sshd[6138]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:37.782414 systemd[1]: sshd@19-172.31.19.14:22-139.178.68.195:50896.service: Deactivated successfully. Jan 29 12:06:37.793927 systemd-logind[1987]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:06:37.794319 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:06:37.798287 systemd-logind[1987]: Removed session 20. Jan 29 12:06:42.813701 systemd[1]: Started sshd@20-172.31.19.14:22-139.178.68.195:50906.service - OpenSSH per-connection server daemon (139.178.68.195:50906). Jan 29 12:06:43.004657 sshd[6154]: Accepted publickey for core from 139.178.68.195 port 50906 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:43.005789 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:43.020030 systemd-logind[1987]: New session 21 of user core. Jan 29 12:06:43.033486 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:06:43.295325 sshd[6154]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:43.305472 systemd[1]: sshd@20-172.31.19.14:22-139.178.68.195:50906.service: Deactivated successfully. Jan 29 12:06:43.332449 systemd-logind[1987]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:06:43.333018 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:06:43.335856 systemd-logind[1987]: Removed session 21. Jan 29 12:06:48.328693 systemd[1]: Started sshd@21-172.31.19.14:22-139.178.68.195:57506.service - OpenSSH per-connection server daemon (139.178.68.195:57506). Jan 29 12:06:48.531213 sshd[6171]: Accepted publickey for core from 139.178.68.195 port 57506 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:48.533821 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:48.540547 systemd-logind[1987]: New session 22 of user core. Jan 29 12:06:48.550133 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:06:48.931459 sshd[6171]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:48.960017 systemd[1]: sshd@21-172.31.19.14:22-139.178.68.195:57506.service: Deactivated successfully. Jan 29 12:06:48.977195 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:06:48.991000 systemd-logind[1987]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:06:48.998734 systemd-logind[1987]: Removed session 22. Jan 29 12:06:53.990557 systemd[1]: Started sshd@22-172.31.19.14:22-139.178.68.195:57508.service - OpenSSH per-connection server daemon (139.178.68.195:57508). Jan 29 12:06:54.208707 sshd[6231]: Accepted publickey for core from 139.178.68.195 port 57508 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:54.212329 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:54.221463 systemd-logind[1987]: New session 23 of user core. Jan 29 12:06:54.228996 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 29 12:06:54.488168 sshd[6231]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:54.492208 systemd[1]: sshd@22-172.31.19.14:22-139.178.68.195:57508.service: Deactivated successfully. Jan 29 12:06:54.498820 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 12:06:54.501187 systemd-logind[1987]: Session 23 logged out. Waiting for processes to exit. Jan 29 12:06:54.502719 systemd-logind[1987]: Removed session 23. Jan 29 12:06:59.519300 systemd[1]: Started sshd@23-172.31.19.14:22-139.178.68.195:36418.service - OpenSSH per-connection server daemon (139.178.68.195:36418). Jan 29 12:06:59.698326 sshd[6244]: Accepted publickey for core from 139.178.68.195 port 36418 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:06:59.700180 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:59.705886 systemd-logind[1987]: New session 24 of user core. Jan 29 12:06:59.711119 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 12:06:59.948427 sshd[6244]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:59.958231 systemd[1]: sshd@23-172.31.19.14:22-139.178.68.195:36418.service: Deactivated successfully. Jan 29 12:06:59.965794 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 12:06:59.969436 systemd-logind[1987]: Session 24 logged out. Waiting for processes to exit. Jan 29 12:06:59.971661 systemd-logind[1987]: Removed session 24. Jan 29 12:07:02.085120 systemd[1]: run-containerd-runc-k8s.io-98819270084f9e5cbeb3759b230d12245f549959864ff3ec950ead1002082478-runc.M80dVu.mount: Deactivated successfully. Jan 29 12:07:04.150565 containerd[2010]: time="2025-01-29T12:07:04.141155562Z" level=info msg="StopPodSandbox for \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\"" Jan 29 12:07:04.982151 systemd[1]: Started sshd@24-172.31.19.14:22-139.178.68.195:36106.service - OpenSSH per-connection server daemon (139.178.68.195:36106). Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:04.658 [WARNING][6291] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0", GenerateName:"calico-apiserver-5979bcbbd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a4175b0-1c11-4e3d-bc96-94db0994a2b9", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcbbd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b", Pod:"calico-apiserver-5979bcbbd4-bfmd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af00a38861", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:04.665 [INFO][6291] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:04.665 [INFO][6291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" iface="eth0" netns="" Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:04.665 [INFO][6291] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:04.666 [INFO][6291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:05.073 [INFO][6297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" HandleID="k8s-pod-network.4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:05.082 [INFO][6297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:05.083 [INFO][6297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:05.104 [WARNING][6297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" HandleID="k8s-pod-network.4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:05.104 [INFO][6297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" HandleID="k8s-pod-network.4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:05.108 [INFO][6297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:05.117221 containerd[2010]: 2025-01-29 12:07:05.111 [INFO][6291] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:07:05.118215 containerd[2010]: time="2025-01-29T12:07:05.118162071Z" level=info msg="TearDown network for sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\" successfully" Jan 29 12:07:05.118215 containerd[2010]: time="2025-01-29T12:07:05.118203826Z" level=info msg="StopPodSandbox for \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\" returns successfully" Jan 29 12:07:05.256417 sshd[6302]: Accepted publickey for core from 139.178.68.195 port 36106 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:07:05.259946 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:05.271502 containerd[2010]: time="2025-01-29T12:07:05.269614184Z" level=info msg="RemovePodSandbox for \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\"" Jan 29 12:07:05.270582 systemd-logind[1987]: New session 25 of user core. Jan 29 12:07:05.277015 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 12:07:05.283762 containerd[2010]: time="2025-01-29T12:07:05.283509143Z" level=info msg="Forcibly stopping sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\"" Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.383 [WARNING][6320] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0", GenerateName:"calico-apiserver-5979bcbbd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a4175b0-1c11-4e3d-bc96-94db0994a2b9", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcbbd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"ee6503568998f39970532573bf35247e1e59a6a16f823a2facb4e5a47f25a33b", Pod:"calico-apiserver-5979bcbbd4-bfmd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af00a38861", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.384 [INFO][6320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.384 [INFO][6320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" iface="eth0" netns="" Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.384 [INFO][6320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.384 [INFO][6320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.483 [INFO][6326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" HandleID="k8s-pod-network.4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.486 [INFO][6326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.486 [INFO][6326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.511 [WARNING][6326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" HandleID="k8s-pod-network.4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.513 [INFO][6326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" HandleID="k8s-pod-network.4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--bfmd4-eth0" Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.528 [INFO][6326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:05.552413 containerd[2010]: 2025-01-29 12:07:05.545 [INFO][6320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491" Jan 29 12:07:05.552413 containerd[2010]: time="2025-01-29T12:07:05.550683506Z" level=info msg="TearDown network for sandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\" successfully" Jan 29 12:07:05.604184 containerd[2010]: time="2025-01-29T12:07:05.604054839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:07:05.649679 containerd[2010]: time="2025-01-29T12:07:05.649636887Z" level=info msg="RemovePodSandbox \"4e9ab0a5f194f523b411b219a8e12ca8bc89d8c056aaeada65f5fb6d627fc491\" returns successfully" Jan 29 12:07:05.651337 containerd[2010]: time="2025-01-29T12:07:05.651298581Z" level=info msg="StopPodSandbox for \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\"" Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.763 [WARNING][6350] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bbdf3301-3539-4945-83b3-f31451672e0c", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6", Pod:"coredns-7db6d8ff4d-6glqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia78caeeaafb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.764 [INFO][6350] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.764 [INFO][6350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" iface="eth0" netns="" Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.765 [INFO][6350] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.765 [INFO][6350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.831 [INFO][6357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" HandleID="k8s-pod-network.c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.831 [INFO][6357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.831 [INFO][6357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.855 [WARNING][6357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" HandleID="k8s-pod-network.c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.855 [INFO][6357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" HandleID="k8s-pod-network.c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.858 [INFO][6357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:05.865783 containerd[2010]: 2025-01-29 12:07:05.861 [INFO][6350] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:07:05.865783 containerd[2010]: time="2025-01-29T12:07:05.865627552Z" level=info msg="TearDown network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\" successfully" Jan 29 12:07:05.865783 containerd[2010]: time="2025-01-29T12:07:05.865660893Z" level=info msg="StopPodSandbox for \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\" returns successfully" Jan 29 12:07:05.875375 containerd[2010]: time="2025-01-29T12:07:05.866748084Z" level=info msg="RemovePodSandbox for \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\"" Jan 29 12:07:05.875375 containerd[2010]: time="2025-01-29T12:07:05.866786766Z" level=info msg="Forcibly stopping sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\"" Jan 29 12:07:05.959938 sshd[6302]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:05.979485 systemd[1]: sshd@24-172.31.19.14:22-139.178.68.195:36106.service: Deactivated successfully. Jan 29 12:07:05.990206 systemd-logind[1987]: Session 25 logged out. Waiting for processes to exit. Jan 29 12:07:05.990855 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 12:07:06.029758 systemd-logind[1987]: Removed session 25. Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:05.967 [WARNING][6375] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bbdf3301-3539-4945-83b3-f31451672e0c", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"c63ededfd2811fcb6f1da277846263b278f98dc07fbf2adf94a2ea6bfdbfdba6", Pod:"coredns-7db6d8ff4d-6glqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia78caeeaafb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:05.969 [INFO][6375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:05.970 [INFO][6375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" iface="eth0" netns="" Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:05.970 [INFO][6375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:05.970 [INFO][6375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:06.076 [INFO][6381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" HandleID="k8s-pod-network.c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:06.076 [INFO][6381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:06.076 [INFO][6381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:06.084 [WARNING][6381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" HandleID="k8s-pod-network.c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:06.084 [INFO][6381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" HandleID="k8s-pod-network.c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--6glqj-eth0" Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:06.086 [INFO][6381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:06.093537 containerd[2010]: 2025-01-29 12:07:06.090 [INFO][6375] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65" Jan 29 12:07:06.094667 containerd[2010]: time="2025-01-29T12:07:06.094610887Z" level=info msg="TearDown network for sandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\" successfully" Jan 29 12:07:06.101165 containerd[2010]: time="2025-01-29T12:07:06.101114230Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:07:06.101369 containerd[2010]: time="2025-01-29T12:07:06.101211711Z" level=info msg="RemovePodSandbox \"c00b30d00cabe71fbd21668c8d2eaa2aa8c0cc2d6069c588e6a43cdf6d937f65\" returns successfully" Jan 29 12:07:06.102083 containerd[2010]: time="2025-01-29T12:07:06.102047108Z" level=info msg="StopPodSandbox for \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\"" Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.202 [WARNING][6402] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"393c076b-4fd5-42ce-ac5b-7c010e93a9f4", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca", Pod:"coredns-7db6d8ff4d-8nm9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1e99c52a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.203 [INFO][6402] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.203 [INFO][6402] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" iface="eth0" netns="" Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.203 [INFO][6402] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.203 [INFO][6402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.237 [INFO][6409] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" HandleID="k8s-pod-network.4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.237 [INFO][6409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.237 [INFO][6409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.249 [WARNING][6409] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" HandleID="k8s-pod-network.4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.249 [INFO][6409] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" HandleID="k8s-pod-network.4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.252 [INFO][6409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:06.256129 containerd[2010]: 2025-01-29 12:07:06.254 [INFO][6402] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:07:06.259963 containerd[2010]: time="2025-01-29T12:07:06.256205521Z" level=info msg="TearDown network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\" successfully" Jan 29 12:07:06.259963 containerd[2010]: time="2025-01-29T12:07:06.256237218Z" level=info msg="StopPodSandbox for \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\" returns successfully" Jan 29 12:07:06.259963 containerd[2010]: time="2025-01-29T12:07:06.256709930Z" level=info msg="RemovePodSandbox for \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\"" Jan 29 12:07:06.259963 containerd[2010]: time="2025-01-29T12:07:06.256743203Z" level=info msg="Forcibly stopping sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\"" Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.345 [WARNING][6427] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"393c076b-4fd5-42ce-ac5b-7c010e93a9f4", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"b643fc6f14c5b9ed0b2f0e19d754cb603976e357c7df36efd53b5f4dbd2b72ca", Pod:"coredns-7db6d8ff4d-8nm9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1e99c52a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.347 [INFO][6427] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.347 [INFO][6427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" iface="eth0" netns="" Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.347 [INFO][6427] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.347 [INFO][6427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.378 [INFO][6434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" HandleID="k8s-pod-network.4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.379 [INFO][6434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.379 [INFO][6434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.385 [WARNING][6434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" HandleID="k8s-pod-network.4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.385 [INFO][6434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" HandleID="k8s-pod-network.4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Workload="ip--172--31--19--14-k8s-coredns--7db6d8ff4d--8nm9g-eth0" Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.386 [INFO][6434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:06.391908 containerd[2010]: 2025-01-29 12:07:06.390 [INFO][6427] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40" Jan 29 12:07:06.393267 containerd[2010]: time="2025-01-29T12:07:06.391943722Z" level=info msg="TearDown network for sandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\" successfully" Jan 29 12:07:06.399865 containerd[2010]: time="2025-01-29T12:07:06.399814486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:07:06.399865 containerd[2010]: time="2025-01-29T12:07:06.399897149Z" level=info msg="RemovePodSandbox \"4bc1089cb4aeb76c8064f39548e86b371492872da6f7c930e1ca277c0a355f40\" returns successfully" Jan 29 12:07:06.400680 containerd[2010]: time="2025-01-29T12:07:06.400652636Z" level=info msg="StopPodSandbox for \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\"" Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.447 [WARNING][6452] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0", GenerateName:"calico-apiserver-5979bcbbd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a677651-8370-4ade-886d-e86025868e97", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcbbd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910", Pod:"calico-apiserver-5979bcbbd4-mbmww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96c052cd7d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.447 [INFO][6452] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.447 [INFO][6452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" iface="eth0" netns="" Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.447 [INFO][6452] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.447 [INFO][6452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.474 [INFO][6458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" HandleID="k8s-pod-network.66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.474 [INFO][6458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.474 [INFO][6458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.481 [WARNING][6458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" HandleID="k8s-pod-network.66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.481 [INFO][6458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" HandleID="k8s-pod-network.66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.483 [INFO][6458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:06.486969 containerd[2010]: 2025-01-29 12:07:06.485 [INFO][6452] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:07:06.487776 containerd[2010]: time="2025-01-29T12:07:06.487051677Z" level=info msg="TearDown network for sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\" successfully" Jan 29 12:07:06.487776 containerd[2010]: time="2025-01-29T12:07:06.487086663Z" level=info msg="StopPodSandbox for \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\" returns successfully" Jan 29 12:07:06.487776 containerd[2010]: time="2025-01-29T12:07:06.487658606Z" level=info msg="RemovePodSandbox for \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\"" Jan 29 12:07:06.487776 containerd[2010]: time="2025-01-29T12:07:06.487692665Z" level=info msg="Forcibly stopping sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\"" Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.575 [WARNING][6477] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0", GenerateName:"calico-apiserver-5979bcbbd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a677651-8370-4ade-886d-e86025868e97", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcbbd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"8d20abf4a3bf714b050847cc73935170e059695029a3812b0d66c1a4f98dd910", Pod:"calico-apiserver-5979bcbbd4-mbmww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96c052cd7d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.576 [INFO][6477] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.576 [INFO][6477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" iface="eth0" netns="" Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.576 [INFO][6477] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.576 [INFO][6477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.617 [INFO][6484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" HandleID="k8s-pod-network.66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.617 [INFO][6484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.617 [INFO][6484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.626 [WARNING][6484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" HandleID="k8s-pod-network.66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.626 [INFO][6484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" HandleID="k8s-pod-network.66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Workload="ip--172--31--19--14-k8s-calico--apiserver--5979bcbbd4--mbmww-eth0" Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.628 [INFO][6484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:06.634263 containerd[2010]: 2025-01-29 12:07:06.632 [INFO][6477] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311" Jan 29 12:07:06.634263 containerd[2010]: time="2025-01-29T12:07:06.634235064Z" level=info msg="TearDown network for sandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\" successfully" Jan 29 12:07:06.648282 containerd[2010]: time="2025-01-29T12:07:06.648130778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:07:06.648425 containerd[2010]: time="2025-01-29T12:07:06.648403023Z" level=info msg="RemovePodSandbox \"66b980fa049646add27cd57ed5d6d02104cb18f07824c66b3ead4c9c0a0cc311\" returns successfully" Jan 29 12:07:06.651986 containerd[2010]: time="2025-01-29T12:07:06.651625584Z" level=info msg="StopPodSandbox for \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\"" Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.702 [WARNING][6502] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0", GenerateName:"calico-kube-controllers-f7c9d9464-", Namespace:"calico-system", SelfLink:"", UID:"f814fb92-3f75-45fa-afb5-f59e7f19b575", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7c9d9464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce", Pod:"calico-kube-controllers-f7c9d9464-548vs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf371032d1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.703 [INFO][6502] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.703 [INFO][6502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" iface="eth0" netns="" Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.703 [INFO][6502] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.703 [INFO][6502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.732 [INFO][6508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" HandleID="k8s-pod-network.7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.733 [INFO][6508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.733 [INFO][6508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.740 [WARNING][6508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" HandleID="k8s-pod-network.7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.740 [INFO][6508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" HandleID="k8s-pod-network.7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.743 [INFO][6508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:06.747904 containerd[2010]: 2025-01-29 12:07:06.746 [INFO][6502] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:07:06.749880 containerd[2010]: time="2025-01-29T12:07:06.747962626Z" level=info msg="TearDown network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\" successfully" Jan 29 12:07:06.749880 containerd[2010]: time="2025-01-29T12:07:06.747994571Z" level=info msg="StopPodSandbox for \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\" returns successfully" Jan 29 12:07:06.749880 containerd[2010]: time="2025-01-29T12:07:06.748787182Z" level=info msg="RemovePodSandbox for \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\"" Jan 29 12:07:06.749880 containerd[2010]: time="2025-01-29T12:07:06.748906841Z" level=info msg="Forcibly stopping sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\"" Jan 29 12:07:06.786913 systemd-journald[1497]: Under memory pressure, flushing caches. Jan 29 12:07:06.786468 systemd-resolved[1893]: Under memory pressure, flushing caches. Jan 29 12:07:06.786763 systemd-resolved[1893]: Flushed all caches. Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.811 [WARNING][6526] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0", GenerateName:"calico-kube-controllers-f7c9d9464-", Namespace:"calico-system", SelfLink:"", UID:"f814fb92-3f75-45fa-afb5-f59e7f19b575", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7c9d9464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"7a3ffe4bf7cfc5fbe10079754212983c9b9baf49e0576cecd2a025edf78ff3ce", Pod:"calico-kube-controllers-f7c9d9464-548vs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf371032d1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.811 [INFO][6526] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.811 [INFO][6526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" iface="eth0" netns="" Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.811 [INFO][6526] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.811 [INFO][6526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.843 [INFO][6532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" HandleID="k8s-pod-network.7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.843 [INFO][6532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.843 [INFO][6532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.851 [WARNING][6532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" HandleID="k8s-pod-network.7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.851 [INFO][6532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" HandleID="k8s-pod-network.7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Workload="ip--172--31--19--14-k8s-calico--kube--controllers--f7c9d9464--548vs-eth0" Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.853 [INFO][6532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:06.859282 containerd[2010]: 2025-01-29 12:07:06.857 [INFO][6526] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4" Jan 29 12:07:06.860299 containerd[2010]: time="2025-01-29T12:07:06.859346404Z" level=info msg="TearDown network for sandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\" successfully" Jan 29 12:07:06.873711 containerd[2010]: time="2025-01-29T12:07:06.873652834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:07:06.874012 containerd[2010]: time="2025-01-29T12:07:06.873826669Z" level=info msg="RemovePodSandbox \"7f42ff382403a2288794b845b6130e62ff019c2729e6c2252bfc1cd13579fcd4\" returns successfully" Jan 29 12:07:06.874596 containerd[2010]: time="2025-01-29T12:07:06.874383206Z" level=info msg="StopPodSandbox for \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\"" Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:06.976 [WARNING][6550] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e3ae492-9704-4aa3-aacf-00b3ecf4f562", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f", Pod:"csi-node-driver-ht95p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24123374334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:06.976 [INFO][6550] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:06.976 [INFO][6550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" iface="eth0" netns="" Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:06.976 [INFO][6550] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:06.976 [INFO][6550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:07.002 [INFO][6557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" HandleID="k8s-pod-network.06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:07.002 [INFO][6557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:07.002 [INFO][6557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:07.026 [WARNING][6557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" HandleID="k8s-pod-network.06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:07.026 [INFO][6557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" HandleID="k8s-pod-network.06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:07.029 [INFO][6557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:07.033369 containerd[2010]: 2025-01-29 12:07:07.031 [INFO][6550] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:07:07.034038 containerd[2010]: time="2025-01-29T12:07:07.033411788Z" level=info msg="TearDown network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\" successfully" Jan 29 12:07:07.034038 containerd[2010]: time="2025-01-29T12:07:07.033442216Z" level=info msg="StopPodSandbox for \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\" returns successfully" Jan 29 12:07:07.034038 containerd[2010]: time="2025-01-29T12:07:07.033946244Z" level=info msg="RemovePodSandbox for \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\"" Jan 29 12:07:07.034038 containerd[2010]: time="2025-01-29T12:07:07.033979248Z" level=info msg="Forcibly stopping sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\"" Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.077 [WARNING][6575] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e3ae492-9704-4aa3-aacf-00b3ecf4f562", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-14", ContainerID:"b4f0dfcf05c929d582aef4cfbdcf849b2c83afc9eaa48717e61af268d0a8fc0f", Pod:"csi-node-driver-ht95p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24123374334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.077 [INFO][6575] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.077 [INFO][6575] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" iface="eth0" netns="" Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.077 [INFO][6575] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.077 [INFO][6575] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.104 [INFO][6581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" HandleID="k8s-pod-network.06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.105 [INFO][6581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.105 [INFO][6581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.111 [WARNING][6581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" HandleID="k8s-pod-network.06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.111 [INFO][6581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" HandleID="k8s-pod-network.06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Workload="ip--172--31--19--14-k8s-csi--node--driver--ht95p-eth0" Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.114 [INFO][6581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:07:07.118092 containerd[2010]: 2025-01-29 12:07:07.116 [INFO][6575] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f" Jan 29 12:07:07.118982 containerd[2010]: time="2025-01-29T12:07:07.118138513Z" level=info msg="TearDown network for sandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\" successfully" Jan 29 12:07:07.122960 containerd[2010]: time="2025-01-29T12:07:07.122916760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:07:07.123093 containerd[2010]: time="2025-01-29T12:07:07.122991252Z" level=info msg="RemovePodSandbox \"06f43aea7fc4c3dc32251f7ea10cc944d7f04cd76d69e021ea1eb064d0f4d31f\" returns successfully"