Jan 29 12:03:34.095186 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 12:03:34.095235 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:03:34.095251 kernel: BIOS-provided physical RAM map:
Jan 29 12:03:34.095263 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 12:03:34.095273 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 12:03:34.095284 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 12:03:34.095300 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 29 12:03:34.095311 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 29 12:03:34.095322 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 29 12:03:34.095334 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 12:03:34.095346 kernel: NX (Execute Disable) protection: active
Jan 29 12:03:34.095357 kernel: APIC: Static calls initialized
Jan 29 12:03:34.095369 kernel: SMBIOS 2.7 present.
Jan 29 12:03:34.095380 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 29 12:03:34.095399 kernel: Hypervisor detected: KVM
Jan 29 12:03:34.095413 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 12:03:34.095427 kernel: kvm-clock: using sched offset of 6515261453 cycles
Jan 29 12:03:34.095441 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 12:03:34.095456 kernel: tsc: Detected 2499.996 MHz processor
Jan 29 12:03:34.095571 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 12:03:34.095586 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 12:03:34.095603 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 29 12:03:34.095619 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 12:03:34.095631 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 12:03:34.095645 kernel: Using GB pages for direct mapping
Jan 29 12:03:34.095659 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:03:34.095674 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 29 12:03:34.095686 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 29 12:03:34.095699 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 29 12:03:34.095713 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 29 12:03:34.095732 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 29 12:03:34.095744 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 29 12:03:34.095757 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 29 12:03:34.095825 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 29 12:03:34.095844 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 29 12:03:34.095860 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 29 12:03:34.095876 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 29 12:03:34.095890 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 29 12:03:34.095905 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 29 12:03:34.095924 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 29 12:03:34.095946 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 29 12:03:34.095961 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 29 12:03:34.096051 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 29 12:03:34.096071 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 29 12:03:34.096091 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 29 12:03:34.096106 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 29 12:03:34.096163 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 29 12:03:34.096179 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 29 12:03:34.096195 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 12:03:34.096209 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 12:03:34.096224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 29 12:03:34.096241 kernel: NUMA: Initialized distance table, cnt=1
Jan 29 12:03:34.096327 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 29 12:03:34.097095 kernel: Zone ranges:
Jan 29 12:03:34.097112 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 12:03:34.097126 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 29 12:03:34.097141 kernel: Normal empty
Jan 29 12:03:34.097154 kernel: Movable zone start for each node
Jan 29 12:03:34.097167 kernel: Early memory node ranges
Jan 29 12:03:34.097181 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 12:03:34.097242 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 29 12:03:34.097260 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 29 12:03:34.097273 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 12:03:34.097292 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 12:03:34.097305 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 29 12:03:34.097319 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 29 12:03:34.097333 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 12:03:34.097401 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 29 12:03:34.097420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 12:03:34.097437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 12:03:34.097453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 12:03:34.098017 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 12:03:34.098887 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 12:03:34.099006 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 12:03:34.099044 kernel: TSC deadline timer available
Jan 29 12:03:34.099060 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 12:03:34.099075 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 12:03:34.099090 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 29 12:03:34.099106 kernel: Booting paravirtualized kernel on KVM
Jan 29 12:03:34.099122 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 12:03:34.099138 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 12:03:34.099158 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 12:03:34.099173 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 12:03:34.099189 kernel: pcpu-alloc: [0] 0 1
Jan 29 12:03:34.099204 kernel: kvm-guest: PV spinlocks enabled
Jan 29 12:03:34.099220 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 12:03:34.099237 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:03:34.099253 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:03:34.099269 kernel: random: crng init done
Jan 29 12:03:34.099287 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 12:03:34.099303 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 12:03:34.099319 kernel: Fallback order for Node 0: 0
Jan 29 12:03:34.099334 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 29 12:03:34.099350 kernel: Policy zone: DMA32
Jan 29 12:03:34.099365 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:03:34.099420 kernel: Memory: 1932344K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125156K reserved, 0K cma-reserved)
Jan 29 12:03:34.099435 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 12:03:34.099451 kernel: Kernel/User page tables isolation: enabled
Jan 29 12:03:34.099470 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 12:03:34.099485 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 12:03:34.099501 kernel: Dynamic Preempt: voluntary
Jan 29 12:03:34.099515 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:03:34.099532 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:03:34.099547 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 12:03:34.099563 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:03:34.099579 kernel: Rude variant of Tasks RCU enabled.
Jan 29 12:03:34.099594 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:03:34.099612 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:03:34.099628 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 12:03:34.099644 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 12:03:34.099659 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:03:34.099674 kernel: Console: colour VGA+ 80x25
Jan 29 12:03:34.099690 kernel: printk: console [ttyS0] enabled
Jan 29 12:03:34.099705 kernel: ACPI: Core revision 20230628
Jan 29 12:03:34.099721 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 29 12:03:34.099736 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 12:03:34.099754 kernel: x2apic enabled
Jan 29 12:03:34.099867 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 12:03:34.099902 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 29 12:03:34.099922 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 29 12:03:34.099939 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 29 12:03:34.099955 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 29 12:03:34.099972 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 12:03:34.099987 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 12:03:34.100003 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 12:03:34.100020 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 12:03:34.100169 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 29 12:03:34.100187 kernel: RETBleed: Vulnerable
Jan 29 12:03:34.100203 kernel: Speculative Store Bypass: Vulnerable
Jan 29 12:03:34.100223 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 12:03:34.100239 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 12:03:34.100325 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 29 12:03:34.100347 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 12:03:34.100365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 12:03:34.100381 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 12:03:34.100402 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 29 12:03:34.100524 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 29 12:03:34.100542 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 29 12:03:34.100559 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 29 12:03:34.100575 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 29 12:03:34.100592 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 29 12:03:34.100608 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 12:03:34.100624 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 29 12:03:34.100641 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 29 12:03:34.100657 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 29 12:03:34.100674 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 29 12:03:34.100694 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 29 12:03:34.100710 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 29 12:03:34.100727 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 29 12:03:34.100743 kernel: Freeing SMP alternatives memory: 32K
Jan 29 12:03:34.100759 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:03:34.100775 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:03:34.100792 kernel: landlock: Up and running.
Jan 29 12:03:34.100808 kernel: SELinux: Initializing.
Jan 29 12:03:34.100824 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 12:03:34.100840 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 12:03:34.100856 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 29 12:03:34.100876 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:03:34.100893 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:03:34.100910 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:03:34.100927 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 29 12:03:34.100944 kernel: signal: max sigframe size: 3632
Jan 29 12:03:34.100960 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:03:34.100977 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:03:34.100994 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 12:03:34.101010 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:03:34.101097 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 12:03:34.101116 kernel: .... node #0, CPUs: #1
Jan 29 12:03:34.101135 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 29 12:03:34.101153 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 29 12:03:34.101169 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 12:03:34.101186 kernel: smpboot: Max logical packages: 1
Jan 29 12:03:34.101203 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 29 12:03:34.101220 kernel: devtmpfs: initialized
Jan 29 12:03:34.101236 kernel: x86/mm: Memory block size: 128MB
Jan 29 12:03:34.101257 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:03:34.101274 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 12:03:34.101291 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:03:34.101308 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:03:34.101324 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:03:34.101341 kernel: audit: type=2000 audit(1738152212.946:1): state=initialized audit_enabled=0 res=1
Jan 29 12:03:34.101357 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:03:34.101373 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 12:03:34.101393 kernel: cpuidle: using governor menu
Jan 29 12:03:34.101413 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:03:34.101429 kernel: dca service started, version 1.12.1
Jan 29 12:03:34.101446 kernel: PCI: Using configuration type 1 for base access
Jan 29 12:03:34.101463 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 12:03:34.101479 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 12:03:34.101495 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 12:03:34.101512 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:03:34.101528 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:03:34.101544 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:03:34.101564 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:03:34.101580 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:03:34.101596 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:03:34.101613 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 29 12:03:34.101630 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 12:03:34.101645 kernel: ACPI: Interpreter enabled
Jan 29 12:03:34.101660 kernel: ACPI: PM: (supports S0 S5)
Jan 29 12:03:34.101676 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 12:03:34.101692 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 12:03:34.101719 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 12:03:34.101735 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 29 12:03:34.101750 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 12:03:34.102006 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:03:34.102282 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 12:03:34.102427 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 12:03:34.102448 kernel: acpiphp: Slot [3] registered
Jan 29 12:03:34.102470 kernel: acpiphp: Slot [4] registered
Jan 29 12:03:34.102487 kernel: acpiphp: Slot [5] registered
Jan 29 12:03:34.102503 kernel: acpiphp: Slot [6] registered
Jan 29 12:03:34.102519 kernel: acpiphp: Slot [7] registered
Jan 29 12:03:34.102537 kernel: acpiphp: Slot [8] registered
Jan 29 12:03:34.102553 kernel: acpiphp: Slot [9] registered
Jan 29 12:03:34.102570 kernel: acpiphp: Slot [10] registered
Jan 29 12:03:34.102587 kernel: acpiphp: Slot [11] registered
Jan 29 12:03:34.102603 kernel: acpiphp: Slot [12] registered
Jan 29 12:03:34.102623 kernel: acpiphp: Slot [13] registered
Jan 29 12:03:34.102640 kernel: acpiphp: Slot [14] registered
Jan 29 12:03:34.102656 kernel: acpiphp: Slot [15] registered
Jan 29 12:03:34.102673 kernel: acpiphp: Slot [16] registered
Jan 29 12:03:34.102687 kernel: acpiphp: Slot [17] registered
Jan 29 12:03:34.102703 kernel: acpiphp: Slot [18] registered
Jan 29 12:03:34.102719 kernel: acpiphp: Slot [19] registered
Jan 29 12:03:34.102736 kernel: acpiphp: Slot [20] registered
Jan 29 12:03:34.102752 kernel: acpiphp: Slot [21] registered
Jan 29 12:03:34.102768 kernel: acpiphp: Slot [22] registered
Jan 29 12:03:34.102861 kernel: acpiphp: Slot [23] registered
Jan 29 12:03:34.102879 kernel: acpiphp: Slot [24] registered
Jan 29 12:03:34.102895 kernel: acpiphp: Slot [25] registered
Jan 29 12:03:34.102909 kernel: acpiphp: Slot [26] registered
Jan 29 12:03:34.102923 kernel: acpiphp: Slot [27] registered
Jan 29 12:03:34.102940 kernel: acpiphp: Slot [28] registered
Jan 29 12:03:34.102956 kernel: acpiphp: Slot [29] registered
Jan 29 12:03:34.102973 kernel: acpiphp: Slot [30] registered
Jan 29 12:03:34.102989 kernel: acpiphp: Slot [31] registered
Jan 29 12:03:34.103009 kernel: PCI host bridge to bus 0000:00
Jan 29 12:03:34.103188 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 12:03:34.103315 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 12:03:34.103440 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 12:03:34.103630 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 29 12:03:34.103755 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 12:03:34.104064 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 12:03:34.104389 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 12:03:34.104669 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 29 12:03:34.104824 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 29 12:03:34.106545 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 29 12:03:34.106763 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 29 12:03:34.106981 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 29 12:03:34.107203 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 29 12:03:34.107353 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 29 12:03:34.107487 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 29 12:03:34.107622 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 29 12:03:34.107767 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 29 12:03:34.108381 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 29 12:03:34.108597 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 29 12:03:34.108822 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 12:03:34.108965 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 29 12:03:34.109163 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 29 12:03:34.109312 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 29 12:03:34.109511 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 29 12:03:34.109532 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 12:03:34.109547 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 12:03:34.109561 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 12:03:34.109580 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 12:03:34.109593 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 12:03:34.109607 kernel: iommu: Default domain type: Translated
Jan 29 12:03:34.109620 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 12:03:34.111457 kernel: PCI: Using ACPI for IRQ routing
Jan 29 12:03:34.111475 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 12:03:34.111492 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 12:03:34.111508 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 29 12:03:34.111689 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 29 12:03:34.111951 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 29 12:03:34.112240 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 12:03:34.112410 kernel: vgaarb: loaded
Jan 29 12:03:34.112490 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 29 12:03:34.112507 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 29 12:03:34.112523 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 12:03:34.112539 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:03:34.112556 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:03:34.112579 kernel: pnp: PnP ACPI init
Jan 29 12:03:34.112596 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 12:03:34.112754 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 12:03:34.112772 kernel: NET: Registered PF_INET protocol family
Jan 29 12:03:34.112789 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 12:03:34.112806 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 12:03:34.112822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:03:34.112839 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 12:03:34.112856 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 12:03:34.112877 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 12:03:34.112894 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 12:03:34.112910 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 12:03:34.112926 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:03:34.112942 kernel: NET: Registered PF_XDP protocol family
Jan 29 12:03:34.113233 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 12:03:34.113361 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 12:03:34.113541 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 12:03:34.113665 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 29 12:03:34.113890 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 12:03:34.113913 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:03:34.113931 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 12:03:34.113946 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 29 12:03:34.113963 kernel: clocksource: Switched to clocksource tsc
Jan 29 12:03:34.113978 kernel: Initialise system trusted keyrings
Jan 29 12:03:34.113994 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 12:03:34.114015 kernel: Key type asymmetric registered
Jan 29 12:03:34.114041 kernel: Asymmetric key parser 'x509' registered
Jan 29 12:03:34.114057 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 12:03:34.114108 kernel: io scheduler mq-deadline registered
Jan 29 12:03:34.114125 kernel: io scheduler kyber registered
Jan 29 12:03:34.114141 kernel: io scheduler bfq registered
Jan 29 12:03:34.114156 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 12:03:34.114172 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 12:03:34.114188 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 12:03:34.114203 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 12:03:34.114223 kernel: i8042: Warning: Keylock active
Jan 29 12:03:34.114238 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 12:03:34.114253 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 12:03:34.114411 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 29 12:03:34.114531 kernel: rtc_cmos 00:00: registered as rtc0
Jan 29 12:03:34.114647 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T12:03:33 UTC (1738152213)
Jan 29 12:03:34.114765 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 29 12:03:34.114860 kernel: intel_pstate: CPU model not supported
Jan 29 12:03:34.114878 kernel: NET: Registered PF_INET6 protocol family
Jan 29 12:03:34.114894 kernel: Segment Routing with IPv6
Jan 29 12:03:34.114910 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 12:03:34.114926 kernel: NET: Registered PF_PACKET protocol family
Jan 29 12:03:34.114941 kernel: Key type dns_resolver registered
Jan 29 12:03:34.114956 kernel: IPI shorthand broadcast: enabled
Jan 29 12:03:34.114972 kernel: sched_clock: Marking stable (535001437, 338943229)->(1013243499, -139298833)
Jan 29 12:03:34.114988 kernel: registered taskstats version 1
Jan 29 12:03:34.115004 kernel: Loading compiled-in X.509 certificates
Jan 29 12:03:34.115162 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 12:03:34.115179 kernel: Key type .fscrypt registered
Jan 29 12:03:34.115192 kernel: Key type fscrypt-provisioning registered
Jan 29 12:03:34.115207 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:03:34.115221 kernel: ima: Allocated hash algorithm: sha1
Jan 29 12:03:34.115234 kernel: ima: No architecture policies found
Jan 29 12:03:34.115248 kernel: clk: Disabling unused clocks
Jan 29 12:03:34.115262 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 12:03:34.115279 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 12:03:34.115293 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 12:03:34.115306 kernel: Run /init as init process
Jan 29 12:03:34.115320 kernel: with arguments:
Jan 29 12:03:34.115334 kernel: /init
Jan 29 12:03:34.115347 kernel: with environment:
Jan 29 12:03:34.115360 kernel: HOME=/
Jan 29 12:03:34.115373 kernel: TERM=linux
Jan 29 12:03:34.115387 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 12:03:34.115406 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:03:34.115435 systemd[1]: Detected virtualization amazon.
Jan 29 12:03:34.115455 systemd[1]: Detected architecture x86-64.
Jan 29 12:03:34.115470 systemd[1]: Running in initrd.
Jan 29 12:03:34.115546 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:03:34.115566 systemd[1]: Hostname set to <localhost>.
Jan 29 12:03:34.115582 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:03:34.115596 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:03:34.115611 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:03:34.115626 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:03:34.115642 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:03:34.115656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:03:34.115672 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:03:34.115692 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:03:34.115709 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:03:34.115724 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:03:34.115739 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:03:34.115754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:03:34.115769 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:03:34.115840 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:03:34.115860 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:03:34.115875 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:03:34.115889 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:03:34.115904 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:03:34.115919 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:03:34.115934 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:03:34.115949 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:03:34.115964 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:03:34.115981 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:03:34.115996 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 12:03:34.116015 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:03:34.116058 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:03:34.116073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:03:34.116127 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:03:34.116143 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:03:34.116162 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:03:34.116180 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:03:34.116228 systemd-journald[178]: Collecting audit messages is disabled.
Jan 29 12:03:34.116263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:03:34.116330 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:03:34.116349 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:03:34.116365 systemd-journald[178]: Journal started
Jan 29 12:03:34.116396 systemd-journald[178]: Runtime Journal (/run/log/journal/ec285b710265e54650154ac64e2be9c5) is 4.8M, max 38.6M, 33.7M free.
Jan 29 12:03:34.119042 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:03:34.120750 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:03:34.123905 systemd-modules-load[179]: Inserted module 'overlay'
Jan 29 12:03:34.138340 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:03:34.154491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:03:34.187209 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:03:34.190841 kernel: Bridge firewalling registered
Jan 29 12:03:34.189549 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 29 12:03:34.191491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:03:34.347687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:03:34.349284 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:03:34.361640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:03:34.370950 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:03:34.385408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:03:34.388785 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:03:34.408067 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:03:34.411323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:03:34.414630 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:03:34.428276 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:03:34.459299 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:03:34.508766 dracut-cmdline[213]: dracut-dracut-053
Jan 29 12:03:34.508766 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:03:34.523658 systemd-resolved[214]: Positive Trust Anchors:
Jan 29 12:03:34.523674 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:03:34.523738 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:03:34.539797 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jan 29 12:03:34.542919 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:03:34.544346 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:03:34.616056 kernel: SCSI subsystem initialized
Jan 29 12:03:34.627052 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:03:34.639053 kernel: iscsi: registered transport (tcp)
Jan 29 12:03:34.677063 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:03:34.677146 kernel: QLogic iSCSI HBA Driver
Jan 29 12:03:34.737130 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:03:34.742215 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:03:34.774493 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:03:34.774576 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:03:34.774600 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:03:34.826081 kernel: raid6: avx512x4 gen() 15061 MB/s
Jan 29 12:03:34.847069 kernel: raid6: avx512x2 gen() 10234 MB/s
Jan 29 12:03:34.864067 kernel: raid6: avx512x1 gen() 5002 MB/s
Jan 29 12:03:34.881078 kernel: raid6: avx2x4 gen() 15812 MB/s
Jan 29 12:03:34.898121 kernel: raid6: avx2x2 gen() 15660 MB/s
Jan 29 12:03:34.915067 kernel: raid6: avx2x1 gen() 11786 MB/s
Jan 29 12:03:34.915177 kernel: raid6: using algorithm avx2x4 gen() 15812 MB/s
Jan 29 12:03:34.932087 kernel: raid6: .... xor() 6487 MB/s, rmw enabled
Jan 29 12:03:34.932167 kernel: raid6: using avx512x2 recovery algorithm
Jan 29 12:03:34.956057 kernel: xor: automatically using best checksumming function avx
Jan 29 12:03:35.147055 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:03:35.158599 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:03:35.164248 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:03:35.209832 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 29 12:03:35.219228 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:03:35.236346 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:03:35.302567 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Jan 29 12:03:35.353768 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:03:35.360255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:03:35.460474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:03:35.474139 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:03:35.531455 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:03:35.538875 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:03:35.545942 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:03:35.550050 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:03:35.563295 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:03:35.612267 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:03:35.642045 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 12:03:35.677627 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 29 12:03:35.696630 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 29 12:03:35.696911 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 29 12:03:35.700546 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 12:03:35.700586 kernel: AES CTR mode by8 optimization enabled
Jan 29 12:03:35.700605 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:80:9b:f6:bd:43
Jan 29 12:03:35.688477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:03:35.688672 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:03:35.691187 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:03:35.695921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:03:35.696164 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:03:35.702651 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:03:35.735527 (udev-worker)[461]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:03:35.737875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:03:35.765053 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 29 12:03:35.765347 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 12:03:35.778051 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 29 12:03:35.785097 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:03:35.785163 kernel: GPT:9289727 != 16777215
Jan 29 12:03:35.785184 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:03:35.785205 kernel: GPT:9289727 != 16777215
Jan 29 12:03:35.785231 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:03:35.785249 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 12:03:35.866067 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (452)
Jan 29 12:03:35.870045 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (451)
Jan 29 12:03:35.990351 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 29 12:03:35.993638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:03:36.017731 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 29 12:03:36.026565 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 29 12:03:36.047356 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 29 12:03:36.047885 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 29 12:03:36.067579 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:03:36.078506 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:03:36.092056 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 12:03:36.092258 disk-uuid[623]: Primary Header is updated.
Jan 29 12:03:36.092258 disk-uuid[623]: Secondary Entries is updated.
Jan 29 12:03:36.092258 disk-uuid[623]: Secondary Header is updated.
Jan 29 12:03:36.100010 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 12:03:36.101320 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:03:36.120100 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 12:03:37.114051 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 12:03:37.114887 disk-uuid[629]: The operation has completed successfully.
Jan 29 12:03:37.385929 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:03:37.386091 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:03:37.414584 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:03:37.419679 sh[975]: Success
Jan 29 12:03:37.437120 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 12:03:37.568080 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:03:37.582256 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:03:37.583899 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:03:37.628450 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 12:03:37.628517 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:03:37.628532 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:03:37.631851 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:03:37.631899 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:03:37.676080 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 12:03:37.681891 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:03:37.684793 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:03:37.692253 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:03:37.699358 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:03:37.729447 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:03:37.729513 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:03:37.729539 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 12:03:37.738082 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 12:03:37.757339 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:03:37.759704 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:03:37.769995 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:03:37.780383 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:03:37.890580 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:03:37.901295 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:03:37.951211 systemd-networkd[1181]: lo: Link UP
Jan 29 12:03:37.951223 systemd-networkd[1181]: lo: Gained carrier
Jan 29 12:03:37.958737 systemd-networkd[1181]: Enumeration completed
Jan 29 12:03:37.959306 systemd-networkd[1181]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:03:37.959312 systemd-networkd[1181]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:03:37.960786 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:03:37.962480 systemd[1]: Reached target network.target - Network.
Jan 29 12:03:37.976767 systemd-networkd[1181]: eth0: Link UP
Jan 29 12:03:37.976773 systemd-networkd[1181]: eth0: Gained carrier
Jan 29 12:03:37.976789 systemd-networkd[1181]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:03:38.006990 systemd-networkd[1181]: eth0: DHCPv4 address 172.31.23.23/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 29 12:03:38.023704 ignition[1107]: Ignition 2.19.0
Jan 29 12:03:38.023718 ignition[1107]: Stage: fetch-offline
Jan 29 12:03:38.024085 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:03:38.024098 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:03:38.024557 ignition[1107]: Ignition finished successfully
Jan 29 12:03:38.030316 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:03:38.042237 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 12:03:38.062287 ignition[1189]: Ignition 2.19.0
Jan 29 12:03:38.062301 ignition[1189]: Stage: fetch
Jan 29 12:03:38.063256 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:03:38.063270 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:03:38.063389 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:03:38.168307 ignition[1189]: PUT result: OK
Jan 29 12:03:38.175070 ignition[1189]: parsed url from cmdline: ""
Jan 29 12:03:38.175088 ignition[1189]: no config URL provided
Jan 29 12:03:38.175100 ignition[1189]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:03:38.175118 ignition[1189]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:03:38.175145 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:03:38.182734 ignition[1189]: PUT result: OK
Jan 29 12:03:38.183876 ignition[1189]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 29 12:03:38.188295 ignition[1189]: GET result: OK
Jan 29 12:03:38.188450 ignition[1189]: parsing config with SHA512: 6a6cdd4e419782a0f57f144c312dfc3b4ce3383f2cfa401e3f146866093da3bbfaef13ac4e3b809da40e35981281ba3d7d6e6500e02aabe203c52dabcd6d7454
Jan 29 12:03:38.199443 unknown[1189]: fetched base config from "system"
Jan 29 12:03:38.199458 unknown[1189]: fetched base config from "system"
Jan 29 12:03:38.200945 ignition[1189]: fetch: fetch complete
Jan 29 12:03:38.199471 unknown[1189]: fetched user config from "aws"
Jan 29 12:03:38.200954 ignition[1189]: fetch: fetch passed
Jan 29 12:03:38.212707 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 12:03:38.201163 ignition[1189]: Ignition finished successfully
Jan 29 12:03:38.224250 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:03:38.247833 ignition[1195]: Ignition 2.19.0
Jan 29 12:03:38.247847 ignition[1195]: Stage: kargs
Jan 29 12:03:38.249364 ignition[1195]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:03:38.249380 ignition[1195]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:03:38.249587 ignition[1195]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:03:38.255011 ignition[1195]: PUT result: OK
Jan 29 12:03:38.261277 ignition[1195]: kargs: kargs passed
Jan 29 12:03:38.261396 ignition[1195]: Ignition finished successfully
Jan 29 12:03:38.265560 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:03:38.277381 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:03:38.303590 ignition[1201]: Ignition 2.19.0
Jan 29 12:03:38.303604 ignition[1201]: Stage: disks
Jan 29 12:03:38.304207 ignition[1201]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:03:38.304222 ignition[1201]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:03:38.304333 ignition[1201]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:03:38.306295 ignition[1201]: PUT result: OK
Jan 29 12:03:38.314215 ignition[1201]: disks: disks passed
Jan 29 12:03:38.314302 ignition[1201]: Ignition finished successfully
Jan 29 12:03:38.317350 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:03:38.319046 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:03:38.324662 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:03:38.326327 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:03:38.329112 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:03:38.332233 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:03:38.340206 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:03:38.389281 systemd-fsck[1210]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 12:03:38.393409 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:03:38.401217 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:03:38.579377 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 12:03:38.581020 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:03:38.584003 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:03:38.600170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:03:38.606245 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:03:38.610516 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 12:03:38.610593 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:03:38.610630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:03:38.626046 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1229)
Jan 29 12:03:38.629243 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:03:38.629310 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:03:38.629331 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 12:03:38.650599 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:03:38.660319 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 12:03:38.660782 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:03:38.675219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:03:38.896081 initrd-setup-root[1253]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:03:38.905248 initrd-setup-root[1260]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:03:38.915344 initrd-setup-root[1267]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:03:38.923612 initrd-setup-root[1274]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:03:39.131632 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:03:39.143200 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:03:39.178811 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:03:39.192773 systemd-networkd[1181]: eth0: Gained IPv6LL
Jan 29 12:03:39.217532 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:03:39.212736 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:03:39.270728 ignition[1341]: INFO : Ignition 2.19.0
Jan 29 12:03:39.270728 ignition[1341]: INFO : Stage: mount
Jan 29 12:03:39.273218 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:03:39.271511 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:03:39.275675 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 12:03:39.275675 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 12:03:39.280480 ignition[1341]: INFO : PUT result: OK
Jan 29 12:03:39.281985 ignition[1341]: INFO : mount: mount passed
Jan 29 12:03:39.284106 ignition[1341]: INFO : Ignition finished successfully
Jan 29 12:03:39.285938 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:03:39.294184 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:03:39.326365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:03:39.350084 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1353)
Jan 29 12:03:39.352049 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:03:39.352122 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:03:39.353049 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 12:03:39.358056 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 12:03:39.360723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:03:39.385106 ignition[1370]: INFO : Ignition 2.19.0 Jan 29 12:03:39.385106 ignition[1370]: INFO : Stage: files Jan 29 12:03:39.388074 ignition[1370]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:39.388074 ignition[1370]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:39.388074 ignition[1370]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:39.393836 ignition[1370]: INFO : PUT result: OK Jan 29 12:03:39.399674 ignition[1370]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:03:39.423284 ignition[1370]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:03:39.423284 ignition[1370]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:03:39.431719 ignition[1370]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:03:39.433443 ignition[1370]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:03:39.435083 ignition[1370]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:03:39.433612 unknown[1370]: wrote ssh authorized keys file for user: core Jan 29 12:03:39.437885 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 12:03:39.437885 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 12:03:39.564244 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:03:39.747393 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 12:03:39.747393 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 12:03:39.752143 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 12:03:40.207425 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 12:03:40.900487 ignition[1370]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 12:03:40.900487 ignition[1370]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 12:03:40.906835 ignition[1370]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:03:40.909327 ignition[1370]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:03:40.909327 ignition[1370]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 12:03:40.913451 ignition[1370]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:03:40.913451 ignition[1370]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:03:40.913451 ignition[1370]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:03:40.913451 ignition[1370]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:03:40.913451 ignition[1370]: INFO : files: files passed Jan 29 12:03:40.913451 ignition[1370]: INFO : Ignition finished successfully Jan 29 12:03:40.923838 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:03:40.935275 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:03:40.938565 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:03:40.947954 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:03:40.948352 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:03:40.970185 initrd-setup-root-after-ignition[1398]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:40.970185 initrd-setup-root-after-ignition[1398]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:40.974647 initrd-setup-root-after-ignition[1402]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:40.979692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:03:40.980500 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:03:40.991279 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 29 12:03:41.029637 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:03:41.029790 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:03:41.032564 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:03:41.033953 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:03:41.036868 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:03:41.044386 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:03:41.063190 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:03:41.071244 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:03:41.089418 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:03:41.089888 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:41.091434 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:03:41.091813 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:03:41.092009 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:03:41.092617 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:03:41.092857 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:03:41.093146 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:03:41.093327 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:03:41.093512 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:03:41.093885 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:03:41.094245 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:03:41.094710 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:03:41.094962 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:03:41.095190 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:03:41.095315 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:03:41.095499 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:03:41.095929 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:41.096383 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:41.096518 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:03:41.111551 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:41.113156 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:03:41.113321 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:03:41.120577 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:03:41.121882 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:03:41.124345 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:03:41.124466 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:03:41.135968 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:03:41.143105 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 29 12:03:41.162778 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:03:41.163082 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:41.178001 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:03:41.178471 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:03:41.196116 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:03:41.196695 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:03:41.218654 ignition[1422]: INFO : Ignition 2.19.0 Jan 29 12:03:41.220112 ignition[1422]: INFO : Stage: umount Jan 29 12:03:41.220112 ignition[1422]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:41.220112 ignition[1422]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:41.224061 ignition[1422]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:41.225506 ignition[1422]: INFO : PUT result: OK Jan 29 12:03:41.229141 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:03:41.230853 ignition[1422]: INFO : umount: umount passed Jan 29 12:03:41.231895 ignition[1422]: INFO : Ignition finished successfully Jan 29 12:03:41.232295 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:03:41.232432 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:03:41.234839 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:03:41.234951 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:03:41.238054 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:03:41.238124 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:03:41.243975 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:03:41.244078 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:03:41.247151 systemd[1]: Stopped target network.target - Network. Jan 29 12:03:41.250006 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:03:41.251237 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:03:41.255274 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:03:41.256444 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:03:41.257303 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:41.262212 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:03:41.263355 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:03:41.266816 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:03:41.266875 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:03:41.268624 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:03:41.268666 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:03:41.270900 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:03:41.270952 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:03:41.273524 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:03:41.273570 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:03:41.275116 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 29 12:03:41.276149 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:03:41.287781 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:03:41.287902 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:03:41.288455 systemd-networkd[1181]: eth0: DHCPv6 lease lost Jan 29 12:03:41.295952 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:03:41.296175 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:03:41.300315 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:03:41.300479 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:03:41.310613 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:03:41.310680 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:41.323730 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:03:41.323828 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:03:41.348179 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:03:41.357407 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:03:41.357520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:03:41.360638 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:03:41.360721 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:41.365723 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:03:41.365927 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:03:41.367512 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:03:41.367579 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:03:41.370196 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:03:41.401528 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:03:41.401679 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:03:41.404054 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:03:41.404168 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:03:41.407059 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:03:41.407121 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:41.409798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:03:41.409847 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:41.412834 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:03:41.412907 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:03:41.421684 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:03:41.421761 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:03:41.426471 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:03:41.426562 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:41.448243 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:03:41.449586 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 29 12:03:41.449669 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:03:41.451276 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:03:41.451346 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:03:41.452802 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:03:41.452866 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:03:41.454457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:03:41.454526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:41.457393 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:03:41.457515 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:03:41.457715 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:03:41.468754 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:03:41.481347 systemd[1]: Switching root. Jan 29 12:03:41.524565 systemd-journald[178]: Journal stopped Jan 29 12:03:43.156287 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 29 12:03:43.156522 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:03:43.156549 kernel: SELinux: policy capability open_perms=1 Jan 29 12:03:43.156574 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:03:43.156590 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:03:43.156615 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:03:43.156633 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:03:43.156651 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:03:43.156673 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:03:43.156691 kernel: audit: type=1403 audit(1738152221.872:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:03:43.156710 systemd[1]: Successfully loaded SELinux policy in 44.246ms. Jan 29 12:03:43.156741 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.981ms. Jan 29 12:03:43.156765 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:03:43.156784 systemd[1]: Detected virtualization amazon. Jan 29 12:03:43.156803 systemd[1]: Detected architecture x86-64. Jan 29 12:03:43.156826 systemd[1]: Detected first boot. Jan 29 12:03:43.156844 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:03:43.156864 zram_generator::config[1466]: No configuration found. Jan 29 12:03:43.156884 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:03:43.156904 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 12:03:43.156927 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 12:03:43.156946 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 12:03:43.156967 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:03:43.156986 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
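The audit type=1403 record and the list of SELinux policy capabilities above are emitted once, when PID 1 loads the policy right after switch-root (44.246 ms here). On the booted system the resulting state can be inspected with the standard policycoreutils commands; Flatcar loads a policy but typically runs it permissive, so check the mode rather than assuming enforcement:

    getenforce    # Enforcing / Permissive / Disabled
    sestatus      # loaded policy name, current mode, MLS status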
Jan 29 12:03:43.157004 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:03:43.161054 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:03:43.161108 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:03:43.161129 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:03:43.161152 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:03:43.161180 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:03:43.161200 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:43.161220 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:43.161239 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:03:43.161258 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:03:43.161278 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:03:43.161297 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:03:43.161317 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:03:43.161335 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:43.161358 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 12:03:43.161377 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 12:03:43.161396 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 12:03:43.161414 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:03:43.161433 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:43.161453 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:03:43.161471 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:03:43.161490 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:03:43.161511 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:03:43.161530 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:03:43.161554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:43.161573 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:43.161592 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:43.161611 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:03:43.161630 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:03:43.161648 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:03:43.161671 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:03:43.161692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:43.161712 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:03:43.161731 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jan 29 12:03:43.161761 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:03:43.161781 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:03:43.161799 systemd[1]: Reached target machines.target - Containers. Jan 29 12:03:43.161822 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:03:43.161841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:43.161864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:03:43.161884 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:03:43.161902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:03:43.161919 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:03:43.161938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:03:43.161957 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:03:43.161974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:03:43.161993 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:03:43.162015 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 12:03:43.162210 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 12:03:43.162235 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 12:03:43.165853 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 12:03:43.165897 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:03:43.165917 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:03:43.165936 kernel: fuse: init (API version 7.39) Jan 29 12:03:43.165956 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:03:43.165975 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:03:43.166004 kernel: loop: module loaded Jan 29 12:03:43.166032 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:03:43.166051 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 12:03:43.166070 systemd[1]: Stopped verity-setup.service. Jan 29 12:03:43.166089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:43.166109 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:03:43.166127 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:03:43.166146 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:03:43.166165 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:03:43.166187 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:03:43.166205 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:03:43.166223 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
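The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs being started above are instances of systemd's modprobe@.service template, which essentially just runs modprobe on its instance name; the "fuse: init (API version 7.39)" and "loop: module loaded" kernel lines are the corresponding modules arriving. A sketch of how the template behaves (not a verbatim copy of the shipped unit):

    # modprobe@%i.service effectively does: modprobe -abq %i
    systemctl start modprobe@fuse.service   # load a module via the template
    lsmod | grep -E '^(fuse|loop)'          # confirm the modules are present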
Jan 29 12:03:43.166241 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:03:43.166259 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:03:43.166280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:03:43.166298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:03:43.166316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:03:43.166335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:03:43.166401 systemd-journald[1545]: Collecting audit messages is disabled. Jan 29 12:03:43.166438 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:03:43.166462 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:03:43.166484 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:03:43.166504 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:03:43.166524 systemd-journald[1545]: Journal started Jan 29 12:03:43.166559 systemd-journald[1545]: Runtime Journal (/run/log/journal/ec285b710265e54650154ac64e2be9c5) is 4.8M, max 38.6M, 33.7M free. Jan 29 12:03:42.611316 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:03:43.169201 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:03:42.686187 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 12:03:42.686590 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 12:03:43.169537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:03:43.172186 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:03:43.203962 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:03:43.218265 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:03:43.258160 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:03:43.263255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:03:43.278119 kernel: ACPI: bus type drm_connector registered Jan 29 12:03:43.274559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:03:43.279366 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:03:43.283793 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:03:43.284081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:03:43.287251 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:03:43.291090 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:03:43.292793 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:03:43.301805 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:03:43.303157 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:03:43.308390 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:03:43.321431 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 29 12:03:43.332758 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:03:43.335406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:43.340723 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:03:43.346447 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:03:43.349176 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:03:43.359353 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:03:43.380227 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:03:43.383014 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:03:43.385269 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:03:43.413417 systemd-journald[1545]: Time spent on flushing to /var/log/journal/ec285b710265e54650154ac64e2be9c5 is 123.331ms for 961 entries. Jan 29 12:03:43.413417 systemd-journald[1545]: System Journal (/var/log/journal/ec285b710265e54650154ac64e2be9c5) is 8.0M, max 195.6M, 187.6M free. Jan 29 12:03:43.589014 systemd-journald[1545]: Received client request to flush runtime journal. Jan 29 12:03:43.594074 kernel: loop0: detected capacity change from 0 to 140768 Jan 29 12:03:43.594446 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:03:43.430451 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:03:43.432016 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:03:43.448355 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:03:43.451183 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:43.465440 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:03:43.468880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:43.547286 systemd-tmpfiles[1581]: ACLs are not supported, ignoring. Jan 29 12:03:43.547311 systemd-tmpfiles[1581]: ACLs are not supported, ignoring. Jan 29 12:03:43.547740 udevadm[1603]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 12:03:43.566223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:03:43.583633 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:03:43.589969 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:03:43.591271 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:03:43.611196 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:03:43.639192 kernel: loop1: detected capacity change from 0 to 142488 Jan 29 12:03:43.754565 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:03:43.775461 kernel: loop2: detected capacity change from 0 to 218376 Jan 29 12:03:43.775470 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
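systemd-journal-flush.service, finished above, is the hand-off of the volatile /run journal into persistent /var/log/journal; the two size reports (runtime journal 4.8M of max 38.6M, system journal 8.0M of max 195.6M) are caps journald derives automatically from the size of the backing filesystems. The same limits can be pinned in journald.conf; the values below are illustrative overrides, not this instance's settings:

    # /etc/systemd/journald.conf (illustrative)
    [Journal]
    Storage=persistent    # keep the journal under /var/log/journal
    RuntimeMaxUse=38M     # cap for the volatile /run journal
    SystemMaxUse=195M     # cap for the persistent /var/log/journal
    # journalctl --flush requests the same runtime-to-persistent flush on demand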
Jan 29 12:03:43.822370 systemd-tmpfiles[1616]: ACLs are not supported, ignoring. Jan 29 12:03:43.822400 systemd-tmpfiles[1616]: ACLs are not supported, ignoring. Jan 29 12:03:43.833624 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:03:43.850389 kernel: loop3: detected capacity change from 0 to 61336 Jan 29 12:03:43.904048 kernel: loop4: detected capacity change from 0 to 140768 Jan 29 12:03:43.947064 kernel: loop5: detected capacity change from 0 to 142488 Jan 29 12:03:44.014059 kernel: loop6: detected capacity change from 0 to 218376 Jan 29 12:03:44.081868 kernel: loop7: detected capacity change from 0 to 61336 Jan 29 12:03:44.118544 (sd-merge)[1621]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 12:03:44.120427 (sd-merge)[1621]: Merged extensions into '/usr'. Jan 29 12:03:44.132429 systemd[1]: Reloading requested from client PID 1595 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:03:44.132450 systemd[1]: Reloading... Jan 29 12:03:44.295068 zram_generator::config[1645]: No configuration found. Jan 29 12:03:44.683934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:44.845825 systemd[1]: Reloading finished in 712 ms. Jan 29 12:03:44.902069 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:03:44.912279 systemd[1]: Starting ensure-sysext.service... Jan 29 12:03:44.926392 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:03:44.964239 systemd[1]: Reloading requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:03:44.964266 systemd[1]: Reloading... Jan 29 12:03:44.975848 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:03:44.976485 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:03:44.977974 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:03:44.978431 systemd-tmpfiles[1696]: ACLs are not supported, ignoring. Jan 29 12:03:44.978525 systemd-tmpfiles[1696]: ACLs are not supported, ignoring. Jan 29 12:03:45.007986 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:03:45.008006 systemd-tmpfiles[1696]: Skipping /boot Jan 29 12:03:45.074291 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:03:45.074307 systemd-tmpfiles[1696]: Skipping /boot Jan 29 12:03:45.115244 ldconfig[1590]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:03:45.196058 zram_generator::config[1727]: No configuration found. Jan 29 12:03:45.435815 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:45.527573 systemd[1]: Reloading finished in 558 ms. Jan 29 12:03:45.547956 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:03:45.550956 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
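The (sd-merge) lines show systemd-sysext overlaying the four extension images onto /usr, which is why systemd immediately reloads and re-evaluates its unit set afterwards (and trips over the legacy /var/run path in docker.socket both times). For an image activated as /etc/extensions/kubernetes.raw to be merged it has to carry an extension-release file whose ID matches the host's os-release (or ID=_any); a minimal layout, sketched from the sysext conventions rather than read out of this particular image:

    kubernetes.raw (squashfs/erofs or GPT disk image)
    └── usr/
        ├── bin/...                          # payload binaries, e.g. kubelet
        └── lib/extension-release.d/extension-release.kubernetes
            # containing e.g.  ID=flatcar  SYSEXT_LEVEL=1.0

    systemd-sysext status    # list merged extensions
    systemd-sysext refresh   # unmerge and re-merge after changing images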
Jan 29 12:03:45.557630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:03:45.592361 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:03:45.613322 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:03:45.643143 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:03:45.655174 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:03:45.675135 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:03:45.684803 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:03:45.714279 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:03:45.727629 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:45.728183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:45.739247 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:03:45.750148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:03:45.755727 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:03:45.757149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:45.757339 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:45.762845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:45.764256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:45.764851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:45.765005 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:45.778625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:45.778981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:45.789386 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:03:45.791572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:45.791871 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:03:45.794280 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:45.811015 systemd[1]: Finished ensure-sysext.service. Jan 29 12:03:45.820837 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:03:45.823689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 29 12:03:45.823881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:03:45.830157 systemd-udevd[1787]: Using default interface naming scheme 'v255'. Jan 29 12:03:45.837627 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:03:45.837860 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:03:45.841126 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:03:45.850364 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:03:45.853103 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:03:45.854940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:03:45.855315 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:03:45.859629 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:03:45.860183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:03:45.872822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:03:45.886937 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:03:45.896100 augenrules[1815]: No rules Jan 29 12:03:45.900456 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:03:45.913995 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:03:45.923557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:03:45.935258 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:03:45.943678 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:03:45.945697 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:03:46.074441 systemd-resolved[1783]: Positive Trust Anchors: Jan 29 12:03:46.074804 systemd-resolved[1783]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:03:46.075056 systemd-resolved[1783]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:03:46.110360 systemd-resolved[1783]: Defaulting to hostname 'linux'. Jan 29 12:03:46.115645 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:03:46.122396 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:03:46.133663 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 29 12:03:46.140862 systemd-networkd[1826]: lo: Link UP Jan 29 12:03:46.140872 systemd-networkd[1826]: lo: Gained carrier Jan 29 12:03:46.142630 systemd-networkd[1826]: Enumeration completed Jan 29 12:03:46.143153 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:03:46.145227 systemd[1]: Reached target network.target - Network. Jan 29 12:03:46.153742 (udev-worker)[1832]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:03:46.154264 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:03:46.251561 systemd-networkd[1826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:46.251578 systemd-networkd[1826]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:03:46.254825 systemd-networkd[1826]: eth0: Link UP Jan 29 12:03:46.256956 systemd-networkd[1826]: eth0: Gained carrier Jan 29 12:03:46.256992 systemd-networkd[1826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:46.268356 systemd-networkd[1826]: eth0: DHCPv4 address 172.31.23.23/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 12:03:46.282054 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 12:03:46.286087 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 29 12:03:46.304093 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:03:46.304132 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 29 12:03:46.304157 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 12:03:46.304180 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 12:03:46.337532 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1839) Jan 29 12:03:46.346389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:46.425073 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:03:46.517232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 12:03:46.618237 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:03:46.620705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:46.633990 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:03:46.648486 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:03:46.676057 lvm[1942]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:03:46.688212 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:03:46.718729 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:03:46.724815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:46.729780 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:03:46.733659 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
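The match against /usr/lib/systemd/network/zz-default.network, flagged as based on a "potentially unpredictable interface name" because predictable naming is disabled (see the udev-worker NamePolicy= line), is what gets eth0 its DHCPv4 lease of 172.31.23.23/20 from 172.31.16.1. A catch-all unit of that shape looks roughly like this (illustrative; the file Flatcar ships carries additional DHCP tuning):

    # zz-default.network (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes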
Jan 29 12:03:46.735147 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:03:46.748178 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:03:46.756685 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:03:46.765846 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:03:46.775787 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:03:46.775836 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:03:46.779808 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:03:46.783562 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:03:46.788290 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:03:46.795188 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:03:46.798186 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:03:46.800989 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:03:46.803216 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:03:46.804522 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:03:46.806033 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:03:46.806076 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:03:46.816173 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:03:46.827778 lvm[1949]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:03:46.829358 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:03:46.836290 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:03:46.854237 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:03:46.874311 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:03:46.881667 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:03:46.889682 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:03:46.892965 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 12:03:46.901691 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:03:46.910489 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 12:03:46.922079 jq[1953]: false Jan 29 12:03:46.920711 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:03:46.926901 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:03:46.938269 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:03:46.940258 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:03:46.941227 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 29 12:03:46.948378 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:03:46.951581 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:03:46.954920 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:03:46.967926 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:03:46.969273 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:03:47.048303 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:03:47.048664 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:03:47.050839 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:03:47.051817 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:03:47.072962 dbus-daemon[1952]: [system] SELinux support is enabled Jan 29 12:03:47.074199 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:03:47.079938 jq[1966]: true Jan 29 12:03:47.081713 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:03:47.081768 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:03:47.084085 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:03:47.084117 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:03:47.099268 extend-filesystems[1954]: Found loop4 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found loop5 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found loop6 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found loop7 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found nvme0n1 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found nvme0n1p1 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found nvme0n1p2 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found nvme0n1p3 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found usr Jan 29 12:03:47.099268 extend-filesystems[1954]: Found nvme0n1p4 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found nvme0n1p6 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found nvme0n1p7 Jan 29 12:03:47.099268 extend-filesystems[1954]: Found nvme0n1p9 Jan 29 12:03:47.099268 extend-filesystems[1954]: Checking size of /dev/nvme0n1p9 Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: ---------------------------------------------------- Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: ntp-4 is maintained by Network Time Foundation, Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: corporation. 
Support and training for ntp-4 are Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: available at https://www.nwtime.org/support Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: ---------------------------------------------------- Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: proto: precision = 0.102 usec (-23) Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: basedate set to 2025-01-17 Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: gps base set to 2025-01-19 (week 2350) Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 12:03:47.157961 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 12:03:47.139234 (ntainerd)[1982]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:03:47.115210 ntpd[1956]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 29 12:03:47.115241 ntpd[1956]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 12:03:47.173127 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 12:03:47.173127 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: Listen normally on 3 eth0 172.31.23.23:123 Jan 29 12:03:47.173127 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: Listen normally on 4 lo [::1]:123 Jan 29 12:03:47.173127 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: bind(21) AF_INET6 fe80::480:9bff:fef6:bd43%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 12:03:47.173127 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: unable to create socket on eth0 (5) for fe80::480:9bff:fef6:bd43%2#123 Jan 29 12:03:47.173127 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: failed to init interface for address fe80::480:9bff:fef6:bd43%2 Jan 29 12:03:47.173127 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: Listening on routing socket on fd #21 for interface updates Jan 29 12:03:47.115252 ntpd[1956]: ---------------------------------------------------- Jan 29 12:03:47.115262 ntpd[1956]: ntp-4 is maintained by Network Time Foundation, Jan 29 12:03:47.115271 ntpd[1956]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 12:03:47.115281 ntpd[1956]: corporation. 
Support and training for ntp-4 are Jan 29 12:03:47.115291 ntpd[1956]: available at https://www.nwtime.org/support Jan 29 12:03:47.115300 ntpd[1956]: ---------------------------------------------------- Jan 29 12:03:47.131787 ntpd[1956]: proto: precision = 0.102 usec (-23) Jan 29 12:03:47.134583 ntpd[1956]: basedate set to 2025-01-17 Jan 29 12:03:47.134607 ntpd[1956]: gps base set to 2025-01-19 (week 2350) Jan 29 12:03:47.139623 dbus-daemon[1952]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1826 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 12:03:47.157544 ntpd[1956]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 12:03:47.157610 ntpd[1956]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 12:03:47.163679 ntpd[1956]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 12:03:47.168380 ntpd[1956]: Listen normally on 3 eth0 172.31.23.23:123 Jan 29 12:03:47.168597 ntpd[1956]: Listen normally on 4 lo [::1]:123 Jan 29 12:03:47.168663 ntpd[1956]: bind(21) AF_INET6 fe80::480:9bff:fef6:bd43%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 12:03:47.168695 ntpd[1956]: unable to create socket on eth0 (5) for fe80::480:9bff:fef6:bd43%2#123 Jan 29 12:03:47.168789 ntpd[1956]: failed to init interface for address fe80::480:9bff:fef6:bd43%2 Jan 29 12:03:47.168833 ntpd[1956]: Listening on routing socket on fd #21 for interface updates Jan 29 12:03:47.179503 ntpd[1956]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:47.181461 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 12:03:47.182381 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:47.182381 ntpd[1956]: 29 Jan 12:03:47 ntpd[1956]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:47.179552 ntpd[1956]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:47.186955 update_engine[1965]: I20250129 12:03:47.186506 1965 main.cc:92] Flatcar Update Engine starting Jan 29 12:03:47.187301 coreos-metadata[1951]: Jan 29 12:03:47.187 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 12:03:47.188182 coreos-metadata[1951]: Jan 29 12:03:47.187 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 12:03:47.189604 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:03:47.190103 update_engine[1965]: I20250129 12:03:47.189913 1965 update_check_scheduler.cc:74] Next update check in 11m52s Jan 29 12:03:47.203383 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
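[Editor's note] The bind failure above is the usual race with IPv6 Duplicate Address Detection: ntpd asks for the link-local address fe80::480:9bff:fef6:bd43%2 before the kernel has finished marking it usable, gets "Cannot assign requested address" (EADDRNOTAVAIL), and falls back to watching the routing socket; it binds successfully a few seconds later ("Listen normally on 6 eth0 ..." further down). A minimal sketch of the same bind, assuming Python on a Linux host whose eth0 carries that link-local address; binding port 123 needs root:

```python
import socket

# Link-local IPv6 binds need the interface scope id in the sockaddr.
# While the address is still "tentative" (DAD in progress), bind()
# fails with EADDRNOTAVAIL -- exactly what ntpd logs above.
addr = "fe80::480:9bff:fef6:bd43"      # address taken from the log
ifindex = socket.if_nametoindex("eth0")

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    # 4-tuple form: (host, port, flowinfo, scope_id)
    s.bind((addr, 123, 0, ifindex))
    print("bound")
except OSError as e:
    print("bind failed:", e)           # EADDRNOTAVAIL until DAD completes
```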
Jan 29 12:03:47.206595 coreos-metadata[1951]: Jan 29 12:03:47.206 INFO Fetch successful Jan 29 12:03:47.206595 coreos-metadata[1951]: Jan 29 12:03:47.206 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 12:03:47.216267 tar[1980]: linux-amd64/LICENSE Jan 29 12:03:47.216267 tar[1980]: linux-amd64/helm Jan 29 12:03:47.216761 coreos-metadata[1951]: Jan 29 12:03:47.215 INFO Fetch successful Jan 29 12:03:47.216761 coreos-metadata[1951]: Jan 29 12:03:47.216 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 12:03:47.221718 jq[1987]: true Jan 29 12:03:47.222191 coreos-metadata[1951]: Jan 29 12:03:47.221 INFO Fetch successful Jan 29 12:03:47.222191 coreos-metadata[1951]: Jan 29 12:03:47.221 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 12:03:47.226180 coreos-metadata[1951]: Jan 29 12:03:47.223 INFO Fetch successful Jan 29 12:03:47.226180 coreos-metadata[1951]: Jan 29 12:03:47.223 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 12:03:47.227265 coreos-metadata[1951]: Jan 29 12:03:47.227 INFO Fetch failed with 404: resource not found Jan 29 12:03:47.227265 coreos-metadata[1951]: Jan 29 12:03:47.227 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 12:03:47.235195 coreos-metadata[1951]: Jan 29 12:03:47.230 INFO Fetch successful Jan 29 12:03:47.235195 coreos-metadata[1951]: Jan 29 12:03:47.230 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 12:03:47.235195 coreos-metadata[1951]: Jan 29 12:03:47.235 INFO Fetch successful Jan 29 12:03:47.238540 coreos-metadata[1951]: Jan 29 12:03:47.236 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 12:03:47.240672 coreos-metadata[1951]: Jan 29 12:03:47.240 INFO Fetch successful Jan 29 12:03:47.240672 coreos-metadata[1951]: Jan 29 12:03:47.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 12:03:47.253540 coreos-metadata[1951]: Jan 29 12:03:47.251 INFO Fetch successful Jan 29 12:03:47.253540 coreos-metadata[1951]: Jan 29 12:03:47.253 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 12:03:47.253722 extend-filesystems[1954]: Resized partition /dev/nvme0n1p9 Jan 29 12:03:47.267617 extend-filesystems[2004]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:03:47.280211 coreos-metadata[1951]: Jan 29 12:03:47.269 INFO Fetch successful Jan 29 12:03:47.300097 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 12:03:47.339399 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 12:03:47.433052 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:03:47.434736 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:03:47.487342 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 12:03:47.497877 extend-filesystems[2004]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 12:03:47.497877 extend-filesystems[2004]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:03:47.497877 extend-filesystems[2004]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
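[Editor's note] The coreos-metadata lines above follow the IMDSv2 pattern: PUT a short-lived session token, then GET each metadata path with the token attached; optional paths such as "ipv6" return 404 when the instance has none, matching the "Fetch failed with 404" line. A minimal standard-library sketch of that flow (the 2021-01-03 path version matches the log; retries and error handling are omitted):

```python
import urllib.request

BASE = "http://169.254.169.254"

def imds_token(ttl=21600):
    # IMDSv2: PUT with a TTL header returns a session token.
    req = urllib.request.Request(
        f"{BASE}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    return urllib.request.urlopen(req).read().decode()

def imds_get(path, token):
    # Every subsequent fetch presents the token.
    req = urllib.request.Request(
        f"{BASE}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req).read().decode()

token = imds_token()
print(imds_get("instance-id", token))
print(imds_get("placement/availability-zone", token))
```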
Jan 29 12:03:47.503266 extend-filesystems[1954]: Resized filesystem in /dev/nvme0n1p9 Jan 29 12:03:47.520893 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:03:47.524721 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:03:47.530203 bash[2026]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:03:47.570304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:03:47.588266 systemd[1]: Starting sshkeys.service... Jan 29 12:03:47.598056 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1835) Jan 29 12:03:47.620134 systemd-logind[1964]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:03:47.620167 systemd-logind[1964]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 12:03:47.620188 systemd-logind[1964]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:03:47.646411 systemd-logind[1964]: New seat seat0. Jan 29 12:03:47.652820 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:03:47.690680 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:03:47.708822 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:03:47.717666 dbus-daemon[1952]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 12:03:47.718067 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 12:03:47.735121 dbus-daemon[1952]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1996 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 12:03:47.746339 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 12:03:47.810014 polkitd[2073]: Started polkitd version 121 Jan 29 12:03:47.819301 polkitd[2073]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 12:03:47.819384 polkitd[2073]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 12:03:47.831251 systemd-networkd[1826]: eth0: Gained IPv6LL Jan 29 12:03:47.838048 polkitd[2073]: Finished loading, compiling and executing 2 rules Jan 29 12:03:47.874345 dbus-daemon[1952]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 12:03:47.874547 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 12:03:47.883167 polkitd[2073]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 12:03:47.898097 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:03:47.901106 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:03:47.909673 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 12:03:47.920524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:47.938593 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:03:48.089044 locksmithd[1999]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:03:48.090447 systemd-hostnamed[1996]: Hostname set to (transient) Jan 29 12:03:48.108372 systemd-resolved[1783]: System hostname changed to 'ip-172-31-23-23'. 
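[Editor's note] The resize sequence above is the standard Flatcar first-boot grow: the ROOT partition (nvme0n1p9) is larger than the ext4 filesystem on it, and resize2fs grows the mounted filesystem online from 553472 to 1489915 4k blocks. A rough sketch of the same check, assuming the same device names; run as root:

```python
import os
import subprocess

# Partition size from sysfs (reported in 512-byte sectors)...
dev = "/dev/nvme0n1p9"
part_bytes = int(open("/sys/class/block/nvme0n1p9/size").read()) * 512

# ...versus the current filesystem size of the mounted root.
st = os.statvfs("/")
fs_bytes = st.f_frsize * st.f_blocks

if fs_bytes < part_bytes:
    # resize2fs grows a mounted ext4 in place, which is why "/" can be
    # resized while the system is running, as in the log above.
    subprocess.run(["resize2fs", dev], check=True)
```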
Jan 29 12:03:48.179059 coreos-metadata[2072]: Jan 29 12:03:48.175 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 12:03:48.186629 coreos-metadata[2072]: Jan 29 12:03:48.185 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 12:03:48.186629 coreos-metadata[2072]: Jan 29 12:03:48.186 INFO Fetch successful Jan 29 12:03:48.186629 coreos-metadata[2072]: Jan 29 12:03:48.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:03:48.194500 coreos-metadata[2072]: Jan 29 12:03:48.193 INFO Fetch successful Jan 29 12:03:48.196055 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:03:48.209206 unknown[2072]: wrote ssh authorized keys file for user: core Jan 29 12:03:48.278140 update-ssh-keys[2155]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:03:48.280988 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:03:48.294228 systemd[1]: Finished sshkeys.service. Jan 29 12:03:48.303302 amazon-ssm-agent[2113]: Initializing new seelog logger Jan 29 12:03:48.309056 amazon-ssm-agent[2113]: New Seelog Logger Creation Complete Jan 29 12:03:48.309056 amazon-ssm-agent[2113]: 2025/01/29 12:03:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:48.309056 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:48.309056 amazon-ssm-agent[2113]: 2025/01/29 12:03:48 processing appconfig overrides Jan 29 12:03:48.316849 amazon-ssm-agent[2113]: 2025/01/29 12:03:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:48.316849 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:48.316849 amazon-ssm-agent[2113]: 2025/01/29 12:03:48 processing appconfig overrides Jan 29 12:03:48.316849 amazon-ssm-agent[2113]: 2025/01/29 12:03:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:48.316849 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:48.316849 amazon-ssm-agent[2113]: 2025/01/29 12:03:48 processing appconfig overrides Jan 29 12:03:48.316849 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO Proxy environment variables: Jan 29 12:03:48.343902 amazon-ssm-agent[2113]: 2025/01/29 12:03:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:48.343902 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:48.343902 amazon-ssm-agent[2113]: 2025/01/29 12:03:48 processing appconfig overrides Jan 29 12:03:48.400463 containerd[1982]: time="2025-01-29T12:03:48.399598133Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:03:48.424048 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO https_proxy: Jan 29 12:03:48.534074 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO http_proxy: Jan 29 12:03:48.573856 containerd[1982]: time="2025-01-29T12:03:48.572521917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:48.580132 containerd[1982]: time="2025-01-29T12:03:48.580072289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:48.581099 containerd[1982]: time="2025-01-29T12:03:48.581069844Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:03:48.581228 containerd[1982]: time="2025-01-29T12:03:48.581192531Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:03:48.581467 containerd[1982]: time="2025-01-29T12:03:48.581450456Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:03:48.582704 containerd[1982]: time="2025-01-29T12:03:48.582478550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:48.582704 containerd[1982]: time="2025-01-29T12:03:48.582598673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:48.582704 containerd[1982]: time="2025-01-29T12:03:48.582622594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:48.583547 containerd[1982]: time="2025-01-29T12:03:48.583505084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:48.589424 containerd[1982]: time="2025-01-29T12:03:48.589366137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:48.592480 containerd[1982]: time="2025-01-29T12:03:48.591410829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:48.592480 containerd[1982]: time="2025-01-29T12:03:48.591461723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:48.592480 containerd[1982]: time="2025-01-29T12:03:48.591713473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:48.592480 containerd[1982]: time="2025-01-29T12:03:48.592000055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:48.596262 containerd[1982]: time="2025-01-29T12:03:48.596165057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:48.597095 containerd[1982]: time="2025-01-29T12:03:48.597062101Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:03:48.597442 containerd[1982]: time="2025-01-29T12:03:48.597420141Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 12:03:48.600426 containerd[1982]: time="2025-01-29T12:03:48.598805904Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:03:48.611219 containerd[1982]: time="2025-01-29T12:03:48.611171609Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:03:48.614185 containerd[1982]: time="2025-01-29T12:03:48.613324395Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:03:48.614185 containerd[1982]: time="2025-01-29T12:03:48.613363350Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:03:48.614185 containerd[1982]: time="2025-01-29T12:03:48.613387467Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:03:48.614185 containerd[1982]: time="2025-01-29T12:03:48.613410912Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:03:48.614185 containerd[1982]: time="2025-01-29T12:03:48.613596371Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:03:48.614185 containerd[1982]: time="2025-01-29T12:03:48.613992323Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615185821Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615223303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615252968Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615275365Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615296098Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615314998Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615336174Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615357923Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615378940Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615398200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615416169Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615445684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615466954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.615725 containerd[1982]: time="2025-01-29T12:03:48.615485534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615505742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615524058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615545828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615570576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615590903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615610181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615631355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615649004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615667620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.616289 containerd[1982]: time="2025-01-29T12:03:48.615686532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619053855Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619118541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619142384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619160474Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619247243Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619355083Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619374957Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619396172Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619411903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619434622Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619450419Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:03:48.622042 containerd[1982]: time="2025-01-29T12:03:48.619465885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:03:48.622591 containerd[1982]: time="2025-01-29T12:03:48.619869464Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:03:48.622591 containerd[1982]: time="2025-01-29T12:03:48.619957418Z" level=info msg="Connect containerd service" Jan 29 12:03:48.622591 containerd[1982]: time="2025-01-29T12:03:48.620017071Z" level=info msg="using legacy CRI server" Jan 29 12:03:48.622591 containerd[1982]: time="2025-01-29T12:03:48.620041124Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:03:48.622591 containerd[1982]: time="2025-01-29T12:03:48.620226682Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:03:48.626886 containerd[1982]: time="2025-01-29T12:03:48.624412494Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:03:48.626886 containerd[1982]: time="2025-01-29T12:03:48.624894031Z" level=info msg="Start subscribing containerd event" Jan 29 12:03:48.626886 containerd[1982]: time="2025-01-29T12:03:48.624957171Z" level=info msg="Start recovering state" Jan 29 12:03:48.626886 containerd[1982]: time="2025-01-29T12:03:48.626477590Z" level=info msg="Start event monitor" Jan 29 12:03:48.626886 containerd[1982]: time="2025-01-29T12:03:48.626505738Z" level=info msg="Start snapshots syncer" Jan 29 12:03:48.626886 containerd[1982]: time="2025-01-29T12:03:48.626522790Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:03:48.626886 containerd[1982]: time="2025-01-29T12:03:48.626533770Z" level=info msg="Start streaming server" Jan 29 12:03:48.629423 containerd[1982]: time="2025-01-29T12:03:48.627944499Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:03:48.631352 containerd[1982]: time="2025-01-29T12:03:48.629919762Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:03:48.633653 systemd[1]: Started containerd.service - containerd container runtime. 
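[Editor's note] The only error in the containerd startup above is the CRI plugin finding no CNI config under /etc/cni/net.d; pods get no networking until one appears (a cluster network add-on normally installs it). Purely for illustration, a hypothetical minimal bridge conflist written the way a bootstrap script might do it (the name and subnet are assumptions, not anything this host will use):

```python
import json
import os

conf = {
    "cniVersion": "0.4.0",
    "name": "mynet",                      # illustrative name
    "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": True,
        "ipMasq": True,
        "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16",     # illustrative pod subnet
            "routes": [{"dst": "0.0.0.0/0"}],
        },
    }],
}

os.makedirs("/etc/cni/net.d", exist_ok=True)
with open("/etc/cni/net.d/10-mynet.conflist", "w") as f:
    json.dump(conf, f, indent=2)
```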
Jan 29 12:03:48.633958 containerd[1982]: time="2025-01-29T12:03:48.633929808Z" level=info msg="containerd successfully booted in 0.246797s" Jan 29 12:03:48.634402 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO no_proxy: Jan 29 12:03:48.732861 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO Checking if agent identity type OnPrem can be assumed Jan 29 12:03:48.831169 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO Checking if agent identity type EC2 can be assumed Jan 29 12:03:48.872066 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO Agent will take identity from EC2 Jan 29 12:03:48.872239 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:03:48.872362 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:03:48.872436 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:03:48.872595 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 12:03:48.872669 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 29 12:03:48.872740 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 12:03:48.872925 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 12:03:48.873003 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [Registrar] Starting registrar module Jan 29 12:03:48.873116 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 12:03:48.873240 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [EC2Identity] EC2 registration was successful. Jan 29 12:03:48.873481 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [CredentialRefresher] credentialRefresher has started Jan 29 12:03:48.873561 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 12:03:48.873636 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 12:03:48.930212 amazon-ssm-agent[2113]: 2025-01-29 12:03:48 INFO [CredentialRefresher] Next credential rotation will be in 30.80829243026667 minutes Jan 29 12:03:49.161176 tar[1980]: linux-amd64/README.md Jan 29 12:03:49.184925 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:03:49.553713 sshd_keygen[2002]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:03:49.594431 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:03:49.610780 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:03:49.626095 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:03:49.626323 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:03:49.637501 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:03:49.685169 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:03:49.698509 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:03:49.701518 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:03:49.703423 systemd[1]: Reached target getty.target - Login Prompts. 
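[Editor's note] sshd-keygen above generates the three default host key types (RSA, ECDSA, ED25519) on first boot. A sketch of an equivalent using ssh-keygen directly, assuming OpenSSH on PATH and root privileges:

```python
import os
import subprocess

# Generate any missing default host keys with empty passphrases,
# mirroring what sshd-keygen.service reports in the log.
for ktype in ("rsa", "ecdsa", "ed25519"):
    path = f"/etc/ssh/ssh_host_{ktype}_key"
    if not os.path.exists(path):
        subprocess.run(
            ["ssh-keygen", "-q", "-t", ktype, "-f", path, "-N", ""],
            check=True)
```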
Jan 29 12:03:49.895971 amazon-ssm-agent[2113]: 2025-01-29 12:03:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 12:03:49.999058 amazon-ssm-agent[2113]: 2025-01-29 12:03:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2196) started Jan 29 12:03:50.098690 amazon-ssm-agent[2113]: 2025-01-29 12:03:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 12:03:50.115725 ntpd[1956]: Listen normally on 6 eth0 [fe80::480:9bff:fef6:bd43%2]:123 Jan 29 12:03:50.116239 ntpd[1956]: 29 Jan 12:03:50 ntpd[1956]: Listen normally on 6 eth0 [fe80::480:9bff:fef6:bd43%2]:123 Jan 29 12:03:50.226723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:50.230386 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:03:50.233458 systemd[1]: Startup finished in 745ms (kernel) + 8.102s (initrd) + 8.400s (userspace) = 17.248s. Jan 29 12:03:50.419550 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:03:51.453020 kubelet[2211]: E0129 12:03:51.452964 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:03:51.456021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:03:51.456242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:03:51.456674 systemd[1]: kubelet.service: Consumed 1.134s CPU time. Jan 29 12:03:54.495392 systemd-resolved[1783]: Clock change detected. Flushing caches. Jan 29 12:03:57.094554 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:03:57.101651 systemd[1]: Started sshd@0-172.31.23.23:22-139.178.68.195:46642.service - OpenSSH per-connection server daemon (139.178.68.195:46642). Jan 29 12:03:57.292836 sshd[2223]: Accepted publickey for core from 139.178.68.195 port 46642 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:03:57.298971 sshd[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:57.311356 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:03:57.319785 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:03:57.323809 systemd-logind[1964]: New session 1 of user core. Jan 29 12:03:57.342573 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:03:57.351654 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:03:57.365974 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:03:57.527123 systemd[2227]: Queued start job for default target default.target. Jan 29 12:03:57.540358 systemd[2227]: Created slice app.slice - User Application Slice. Jan 29 12:03:57.540409 systemd[2227]: Reached target paths.target - Paths. Jan 29 12:03:57.540430 systemd[2227]: Reached target timers.target - Timers. Jan 29 12:03:57.546985 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 29 12:03:57.563690 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:03:57.563850 systemd[2227]: Reached target sockets.target - Sockets. Jan 29 12:03:57.563872 systemd[2227]: Reached target basic.target - Basic System. Jan 29 12:03:57.563927 systemd[2227]: Reached target default.target - Main User Target. Jan 29 12:03:57.563968 systemd[2227]: Startup finished in 189ms. Jan 29 12:03:57.564235 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:03:57.575723 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:03:57.734710 systemd[1]: Started sshd@1-172.31.23.23:22-139.178.68.195:46656.service - OpenSSH per-connection server daemon (139.178.68.195:46656). Jan 29 12:03:57.946645 sshd[2238]: Accepted publickey for core from 139.178.68.195 port 46656 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:03:57.948617 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:57.967401 systemd-logind[1964]: New session 2 of user core. Jan 29 12:03:57.973754 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:03:58.097744 sshd[2238]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:58.102013 systemd[1]: sshd@1-172.31.23.23:22-139.178.68.195:46656.service: Deactivated successfully. Jan 29 12:03:58.104065 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:03:58.104800 systemd-logind[1964]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:03:58.106263 systemd-logind[1964]: Removed session 2. Jan 29 12:03:58.132972 systemd[1]: Started sshd@2-172.31.23.23:22-139.178.68.195:46668.service - OpenSSH per-connection server daemon (139.178.68.195:46668). Jan 29 12:03:58.312974 sshd[2245]: Accepted publickey for core from 139.178.68.195 port 46668 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:03:58.315451 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:58.323562 systemd-logind[1964]: New session 3 of user core. Jan 29 12:03:58.330801 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:03:58.456742 sshd[2245]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:58.460886 systemd[1]: sshd@2-172.31.23.23:22-139.178.68.195:46668.service: Deactivated successfully. Jan 29 12:03:58.464290 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:03:58.466314 systemd-logind[1964]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:03:58.468713 systemd-logind[1964]: Removed session 3. Jan 29 12:03:58.494178 systemd[1]: Started sshd@3-172.31.23.23:22-139.178.68.195:46676.service - OpenSSH per-connection server daemon (139.178.68.195:46676). Jan 29 12:03:58.675811 sshd[2252]: Accepted publickey for core from 139.178.68.195 port 46676 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:03:58.678808 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:58.693764 systemd-logind[1964]: New session 4 of user core. Jan 29 12:03:58.706726 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:03:58.839060 sshd[2252]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:58.851036 systemd[1]: sshd@3-172.31.23.23:22-139.178.68.195:46676.service: Deactivated successfully. Jan 29 12:03:58.854361 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 29 12:03:58.857227 systemd-logind[1964]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:03:58.873064 systemd[1]: Started sshd@4-172.31.23.23:22-139.178.68.195:46684.service - OpenSSH per-connection server daemon (139.178.68.195:46684). Jan 29 12:03:58.875696 systemd-logind[1964]: Removed session 4. Jan 29 12:03:59.073948 sshd[2259]: Accepted publickey for core from 139.178.68.195 port 46684 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:03:59.076016 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:59.099299 systemd-logind[1964]: New session 5 of user core. Jan 29 12:03:59.106721 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:03:59.254008 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:03:59.254611 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:59.274143 sudo[2262]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:59.298372 sshd[2259]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:59.306364 systemd[1]: sshd@4-172.31.23.23:22-139.178.68.195:46684.service: Deactivated successfully. Jan 29 12:03:59.310155 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:03:59.312704 systemd-logind[1964]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:03:59.314752 systemd-logind[1964]: Removed session 5. Jan 29 12:03:59.350252 systemd[1]: Started sshd@5-172.31.23.23:22-139.178.68.195:46700.service - OpenSSH per-connection server daemon (139.178.68.195:46700). Jan 29 12:03:59.539284 sshd[2267]: Accepted publickey for core from 139.178.68.195 port 46700 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:03:59.541392 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:59.548677 systemd-logind[1964]: New session 6 of user core. Jan 29 12:03:59.558842 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:03:59.663018 sudo[2271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:03:59.663876 sudo[2271]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:59.669371 sudo[2271]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:59.682286 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:03:59.683071 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:59.704873 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:03:59.708822 auditctl[2274]: No rules Jan 29 12:03:59.710547 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:03:59.710789 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:03:59.719931 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:03:59.770689 augenrules[2292]: No rules Jan 29 12:03:59.774208 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:03:59.775701 sudo[2270]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:59.798908 sshd[2267]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:59.803609 systemd[1]: sshd@5-172.31.23.23:22-139.178.68.195:46700.service: Deactivated successfully. 
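[Editor's note] The sudo/audit sequence above deletes the shipped rule files and restarts audit-rules: auditctl reports "No rules" once the kernel rule list is flushed, and augenrules, finding nothing left under /etc/audit/rules.d, loads an empty set. The reload step itself, sketched with the two tools that ship with auditd:

```python
import subprocess

# augenrules merges /etc/audit/rules.d/*.rules and loads the result
# into the kernel; auditctl -l then lists the active rules (here: none).
subprocess.run(["augenrules", "--load"], check=True)
subprocess.run(["auditctl", "-l"], check=True)
```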
Jan 29 12:03:59.806754 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:03:59.808631 systemd-logind[1964]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:03:59.809878 systemd-logind[1964]: Removed session 6. Jan 29 12:03:59.846057 systemd[1]: Started sshd@6-172.31.23.23:22-139.178.68.195:46702.service - OpenSSH per-connection server daemon (139.178.68.195:46702). Jan 29 12:04:00.024205 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 46702 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:04:00.040238 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:04:00.077758 systemd-logind[1964]: New session 7 of user core. Jan 29 12:04:00.089191 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:04:00.233248 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:04:00.233846 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:04:01.012877 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:04:01.013057 (dockerd)[2318]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:04:01.762891 dockerd[2318]: time="2025-01-29T12:04:01.762122467Z" level=info msg="Starting up" Jan 29 12:04:02.003813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:04:02.016820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:02.188707 dockerd[2318]: time="2025-01-29T12:04:02.188295313Z" level=info msg="Loading containers: start." Jan 29 12:04:02.468459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:02.483129 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:02.486361 kernel: Initializing XFRM netlink socket Jan 29 12:04:02.576197 (udev-worker)[2344]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:04:02.640051 kubelet[2391]: E0129 12:04:02.640002 2391 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:02.645684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:02.646215 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:02.678587 systemd-networkd[1826]: docker0: Link UP Jan 29 12:04:02.704235 dockerd[2318]: time="2025-01-29T12:04:02.704081491Z" level=info msg="Loading containers: done." 
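[Editor's note] Between "Loading containers: start." and "done." dockerd creates its default bridge, which systemd-networkd reports above as "docker0: Link UP". A quick way to confirm the interface exists from Python:

```python
import socket

# if_nameindex() returns (index, name) pairs for every interface.
names = [name for _, name in socket.if_nameindex()]
print("docker0 present:", "docker0" in names)
```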
Jan 29 12:04:02.751002 dockerd[2318]: time="2025-01-29T12:04:02.750875323Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:04:02.751188 dockerd[2318]: time="2025-01-29T12:04:02.751000756Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:04:02.751188 dockerd[2318]: time="2025-01-29T12:04:02.751141394Z" level=info msg="Daemon has completed initialization" Jan 29 12:04:02.828760 dockerd[2318]: time="2025-01-29T12:04:02.824846059Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:04:02.828281 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:04:03.880090 containerd[1982]: time="2025-01-29T12:04:03.880041151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 12:04:04.510796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376610665.mount: Deactivated successfully. Jan 29 12:04:07.145604 containerd[1982]: time="2025-01-29T12:04:07.145543074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:07.147149 containerd[1982]: time="2025-01-29T12:04:07.147096559Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 29 12:04:07.148673 containerd[1982]: time="2025-01-29T12:04:07.148225667Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:07.152198 containerd[1982]: time="2025-01-29T12:04:07.152153381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:07.153555 containerd[1982]: time="2025-01-29T12:04:07.153509372Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 3.273423214s" Jan 29 12:04:07.153819 containerd[1982]: time="2025-01-29T12:04:07.153794583Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 29 12:04:07.164376 containerd[1982]: time="2025-01-29T12:04:07.164332059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 12:04:10.531916 containerd[1982]: time="2025-01-29T12:04:10.531862512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:10.533730 containerd[1982]: time="2025-01-29T12:04:10.533675006Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 29 12:04:10.539240 containerd[1982]: time="2025-01-29T12:04:10.537653626Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:10.541360 containerd[1982]: time="2025-01-29T12:04:10.541301927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:10.542782 containerd[1982]: time="2025-01-29T12:04:10.542738236Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 3.378354143s" Jan 29 12:04:10.542956 containerd[1982]: time="2025-01-29T12:04:10.542933825Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 29 12:04:10.544145 containerd[1982]: time="2025-01-29T12:04:10.544107031Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 12:04:12.682668 containerd[1982]: time="2025-01-29T12:04:12.682620691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:12.684788 containerd[1982]: time="2025-01-29T12:04:12.684721972Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 29 12:04:12.686841 containerd[1982]: time="2025-01-29T12:04:12.686775908Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:12.691411 containerd[1982]: time="2025-01-29T12:04:12.690985255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:12.692185 containerd[1982]: time="2025-01-29T12:04:12.692139452Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 2.147988957s" Jan 29 12:04:12.692283 containerd[1982]: time="2025-01-29T12:04:12.692190018Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 29 12:04:12.693411 containerd[1982]: time="2025-01-29T12:04:12.693379217Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 12:04:12.751980 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:04:12.761809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:13.024147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:04:13.038299 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:13.090029 kubelet[2542]: E0129 12:04:13.089980 2542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:13.092844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:13.093036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:14.459651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942813622.mount: Deactivated successfully. Jan 29 12:04:15.273195 containerd[1982]: time="2025-01-29T12:04:15.273135200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:15.274510 containerd[1982]: time="2025-01-29T12:04:15.274387276Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 29 12:04:15.275755 containerd[1982]: time="2025-01-29T12:04:15.275638249Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:15.278499 containerd[1982]: time="2025-01-29T12:04:15.278262606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:15.279925 containerd[1982]: time="2025-01-29T12:04:15.279198769Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.585779007s" Jan 29 12:04:15.279925 containerd[1982]: time="2025-01-29T12:04:15.279241018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 29 12:04:15.280255 containerd[1982]: time="2025-01-29T12:04:15.280225296Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 12:04:15.908898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174830788.mount: Deactivated successfully. 
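[Editor's note] The kubelet keeps exiting with the same error: /var/lib/kubelet/config.yaml does not exist. That file is normally written by kubeadm init/join, so the crash-loop (and systemd's scheduled restarts) is expected on a node that has not been bootstrapped yet. Purely for illustration, a sketch that drops in a minimal KubeletConfiguration; the field values are assumptions, not this host's eventual config (though cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd config dump above):

```python
import os
import textwrap

config = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
""")

os.makedirs("/var/lib/kubelet", exist_ok=True)
with open("/var/lib/kubelet/config.yaml", "w") as f:
    f.write(config)
```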
Jan 29 12:04:17.487207 containerd[1982]: time="2025-01-29T12:04:17.487043966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:17.489163 containerd[1982]: time="2025-01-29T12:04:17.489089624Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 29 12:04:17.491634 containerd[1982]: time="2025-01-29T12:04:17.491568338Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:17.496094 containerd[1982]: time="2025-01-29T12:04:17.496028008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:17.497513 containerd[1982]: time="2025-01-29T12:04:17.497286996Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.217024018s" Jan 29 12:04:17.497774 containerd[1982]: time="2025-01-29T12:04:17.497653388Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 29 12:04:17.499015 containerd[1982]: time="2025-01-29T12:04:17.498939435Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 12:04:18.069830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502586609.mount: Deactivated successfully. 
Jan 29 12:04:18.080313 containerd[1982]: time="2025-01-29T12:04:18.080213535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:18.083879 containerd[1982]: time="2025-01-29T12:04:18.083820914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 12:04:18.088520 containerd[1982]: time="2025-01-29T12:04:18.086109882Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:18.091764 containerd[1982]: time="2025-01-29T12:04:18.091585839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:18.092982 containerd[1982]: time="2025-01-29T12:04:18.092939214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 593.932788ms" Jan 29 12:04:18.092982 containerd[1982]: time="2025-01-29T12:04:18.092978768Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 12:04:18.094022 containerd[1982]: time="2025-01-29T12:04:18.093994090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 12:04:18.507005 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 12:04:18.784978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667019819.mount: Deactivated successfully. 
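Each "Pulled image" record above pairs a mutable repo tag with an immutable content digest (registry.k8s.io/pause@sha256:ee6521...). An OCI digest is just the SHA-256 of the referenced manifest bytes, so verification reduces to a hash comparison; a minimal sketch under that assumption:

    import hashlib

    def verify_digest(blob: bytes, expected: str) -> bool:
        """Check an OCI 'sha256:<hex>' content digest against blob bytes."""
        algo, _, want = expected.partition(":")
        if algo != "sha256":
            raise ValueError("only sha256 handled in this sketch")
        return hashlib.sha256(blob).hexdigest() == want

    # e.g., with the pause manifest bytes fetched from the registry:
    # verify_digest(manifest_bytes,
    #     "sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a")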
Jan 29 12:04:21.354997 containerd[1982]: time="2025-01-29T12:04:21.354941979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:21.356681 containerd[1982]: time="2025-01-29T12:04:21.356626439Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 29 12:04:21.357412 containerd[1982]: time="2025-01-29T12:04:21.357339861Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:21.360761 containerd[1982]: time="2025-01-29T12:04:21.360702286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:21.369349 containerd[1982]: time="2025-01-29T12:04:21.369289576Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.275251407s" Jan 29 12:04:21.370975 containerd[1982]: time="2025-01-29T12:04:21.369710834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 29 12:04:23.252678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:04:23.263640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:23.590781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:23.601227 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:23.706071 kubelet[2698]: E0129 12:04:23.706022 2698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:23.710856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:23.711989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:25.066286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:25.073954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:25.135555 systemd[1]: Reloading requested from client PID 2713 ('systemctl') (unit session-7.scope)... Jan 29 12:04:25.135580 systemd[1]: Reloading... Jan 29 12:04:25.319517 zram_generator::config[2751]: No configuration found. Jan 29 12:04:25.535346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:04:25.696440 systemd[1]: Reloading finished in 560 ms. Jan 29 12:04:25.760779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
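The four pulls recorded above each log a byte count and a wall-clock duration, which gives a rough picture of registry throughput on this node; a quick computation over the logged figures:

    # (image, bytes, seconds) copied verbatim from the pull records above.
    pulls = [
        ("kube-proxy:v1.32.1", 30908485, 2.585779007),
        ("coredns:v1.11.3",    18562039, 2.217024018),
        ("pause:3.10",           320368, 0.593932788),
        ("etcd:3.5.16-0",      57680541, 3.275251407),
    ]

    for name, size, secs in pulls:
        print(f"{name:20s} {size / secs / 2**20:6.2f} MiB/s")

    # etcd, the largest image, sustains about 16.8 MiB/s, while the tiny
    # pause image (~0.5 MiB/s) is dominated by per-request latency rather
    # than bandwidth.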
Jan 29 12:04:25.769793 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:25.771379 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:04:25.771652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:25.778129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:26.595150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:26.608329 (kubelet)[2815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:04:26.682505 kubelet[2815]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:26.682505 kubelet[2815]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 12:04:26.682505 kubelet[2815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:26.684529 kubelet[2815]: I0129 12:04:26.684435 2815 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:04:27.337676 kubelet[2815]: I0129 12:04:27.337625 2815 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 12:04:27.337676 kubelet[2815]: I0129 12:04:27.337662 2815 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:04:27.338920 kubelet[2815]: I0129 12:04:27.338891 2815 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 12:04:27.432199 kubelet[2815]: I0129 12:04:27.431830 2815 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:04:27.439879 kubelet[2815]: E0129 12:04:27.439812 2815 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:27.462086 kubelet[2815]: E0129 12:04:27.462042 2815 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:04:27.462086 kubelet[2815]: I0129 12:04:27.462082 2815 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:04:27.467623 kubelet[2815]: I0129 12:04:27.467590 2815 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:04:27.475945 kubelet[2815]: I0129 12:04:27.475809 2815 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:04:27.476154 kubelet[2815]: I0129 12:04:27.475939 2815 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-23","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:04:27.476291 kubelet[2815]: I0129 12:04:27.476157 2815 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:04:27.476291 kubelet[2815]: I0129 12:04:27.476175 2815 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 12:04:27.476389 kubelet[2815]: I0129 12:04:27.476340 2815 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:27.481606 kubelet[2815]: I0129 12:04:27.481568 2815 kubelet.go:446] "Attempting to sync node with API server" Jan 29 12:04:27.481606 kubelet[2815]: I0129 12:04:27.481610 2815 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:04:27.482909 kubelet[2815]: I0129 12:04:27.481645 2815 kubelet.go:352] "Adding apiserver pod source" Jan 29 12:04:27.482909 kubelet[2815]: I0129 12:04:27.481659 2815 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:04:27.497128 kubelet[2815]: W0129 12:04:27.497067 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.23:6443: connect: connection refused Jan 29 12:04:27.497871 kubelet[2815]: E0129 12:04:27.497840 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:27.498010 kubelet[2815]: I0129 
12:04:27.497989 2815 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:04:27.503923 kubelet[2815]: W0129 12:04:27.503747 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-23&limit=500&resourceVersion=0": dial tcp 172.31.23.23:6443: connect: connection refused Jan 29 12:04:27.503923 kubelet[2815]: E0129 12:04:27.503833 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-23&limit=500&resourceVersion=0\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:27.506515 kubelet[2815]: I0129 12:04:27.504707 2815 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:04:27.506515 kubelet[2815]: W0129 12:04:27.505674 2815 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:04:27.508122 kubelet[2815]: I0129 12:04:27.508085 2815 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 12:04:27.508246 kubelet[2815]: I0129 12:04:27.508140 2815 server.go:1287] "Started kubelet" Jan 29 12:04:27.513646 kubelet[2815]: I0129 12:04:27.511541 2815 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:04:27.517002 kubelet[2815]: I0129 12:04:27.516927 2815 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:04:27.518843 kubelet[2815]: I0129 12:04:27.517311 2815 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:04:27.519938 kubelet[2815]: I0129 12:04:27.519709 2815 server.go:490] "Adding debug handlers to kubelet server" Jan 29 12:04:27.525793 kubelet[2815]: E0129 12:04:27.520531 2815 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.23:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-23.181f2847365c392a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-23,UID:ip-172-31-23-23,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-23,},FirstTimestamp:2025-01-29 12:04:27.508103466 +0000 UTC m=+0.895228647,LastTimestamp:2025-01-29 12:04:27.508103466 +0000 UTC m=+0.895228647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-23,}" Jan 29 12:04:27.527328 kubelet[2815]: I0129 12:04:27.527300 2815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:04:27.527784 kubelet[2815]: I0129 12:04:27.527690 2815 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:04:27.531784 kubelet[2815]: E0129 12:04:27.531135 2815 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-23-23\" not found" Jan 29 12:04:27.531784 kubelet[2815]: I0129 12:04:27.531176 2815 
volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 12:04:27.535643 kubelet[2815]: I0129 12:04:27.534022 2815 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:04:27.535643 kubelet[2815]: I0129 12:04:27.534092 2815 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:04:27.535643 kubelet[2815]: W0129 12:04:27.534638 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.23:6443: connect: connection refused Jan 29 12:04:27.535643 kubelet[2815]: E0129 12:04:27.534707 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:27.535643 kubelet[2815]: I0129 12:04:27.535146 2815 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:04:27.537618 kubelet[2815]: I0129 12:04:27.536710 2815 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:04:27.539549 kubelet[2815]: E0129 12:04:27.539470 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-23?timeout=10s\": dial tcp 172.31.23.23:6443: connect: connection refused" interval="200ms" Jan 29 12:04:27.543507 kubelet[2815]: I0129 12:04:27.542092 2815 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:04:27.565388 kubelet[2815]: E0129 12:04:27.565352 2815 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:04:27.589771 kubelet[2815]: I0129 12:04:27.589631 2815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:04:27.597387 kubelet[2815]: I0129 12:04:27.596075 2815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:04:27.597387 kubelet[2815]: I0129 12:04:27.596114 2815 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 12:04:27.599720 kubelet[2815]: I0129 12:04:27.599695 2815 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 12:04:27.599720 kubelet[2815]: I0129 12:04:27.599717 2815 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 12:04:27.603453 kubelet[2815]: I0129 12:04:27.599759 2815 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:27.603453 kubelet[2815]: I0129 12:04:27.600845 2815 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
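The event the kubelet cannot deliver above carries the generated name ip-172-31-23-23.181f2847365c392a; Kubernetes names such events <object>.<hex>, where the suffix is the event's FirstTimestamp in nanoseconds since the epoch. Decoding it recovers exactly the logged timestamp:

    from datetime import datetime, timezone

    nanos = int("181f2847365c392a", 16)
    secs, ns = divmod(nanos, 10**9)
    print(nanos)                                        # 1738152267508103466
    print(datetime.fromtimestamp(secs, tz=timezone.utc), f"+{ns}ns")
    # 2025-01-29 12:04:27+00:00 +508103466ns
    # matching the FirstTimestamp 2025-01-29 12:04:27.508103466 +0000 UTC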
Jan 29 12:04:27.603453 kubelet[2815]: I0129 12:04:27.600864 2815 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 12:04:27.603453 kubelet[2815]: W0129 12:04:27.601467 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.23:6443: connect: connection refused Jan 29 12:04:27.603453 kubelet[2815]: E0129 12:04:27.601518 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:27.603453 kubelet[2815]: E0129 12:04:27.602872 2815 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:04:27.608647 kubelet[2815]: I0129 12:04:27.606429 2815 policy_none.go:49] "None policy: Start" Jan 29 12:04:27.608647 kubelet[2815]: I0129 12:04:27.606449 2815 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 12:04:27.608647 kubelet[2815]: I0129 12:04:27.606465 2815 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:04:27.615516 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:04:27.631580 kubelet[2815]: E0129 12:04:27.631425 2815 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-23-23\" not found" Jan 29 12:04:27.632518 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:04:27.636320 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:04:27.644469 kubelet[2815]: I0129 12:04:27.644443 2815 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:04:27.645287 kubelet[2815]: I0129 12:04:27.645267 2815 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:04:27.645585 kubelet[2815]: I0129 12:04:27.645529 2815 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:04:27.646821 kubelet[2815]: I0129 12:04:27.646355 2815 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:04:27.648263 kubelet[2815]: E0129 12:04:27.648246 2815 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 12:04:27.648641 kubelet[2815]: E0129 12:04:27.648620 2815 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-23\" not found" Jan 29 12:04:27.714283 systemd[1]: Created slice kubepods-burstable-pod6f737d8717774f2ff25aa7df574f10b8.slice - libcontainer container kubepods-burstable-pod6f737d8717774f2ff25aa7df574f10b8.slice. 
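The slices created above are kubelet's systemd cgroup layout: kubepods.slice at the top, one child slice per QoS class (burstable and besteffort; guaranteed pods sit directly under kubepods.slice), then one slice per pod named from its QoS class and UID, as in kubepods-burstable-pod6f737d8717774f2ff25aa7df574f10b8.slice. A sketch of that naming (assumes a UID needing no further systemd escaping; kubelet also maps '-' in UIDs to '_'):

    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        """Systemd slice name for a pod cgroup, as seen in the log above."""
        uid = pod_uid.replace("-", "_")  # kubelet's UID munging for systemd
        if qos_class == "guaranteed":
            return f"kubepods-pod{uid}.slice"
        return f"kubepods-{qos_class}-pod{uid}.slice"

    # pod_slice_name("burstable", "6f737d8717774f2ff25aa7df574f10b8")
    # -> 'kubepods-burstable-pod6f737d8717774f2ff25aa7df574f10b8.slice'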
Jan 29 12:04:27.727924 kubelet[2815]: E0129 12:04:27.727442 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:27.731548 systemd[1]: Created slice kubepods-burstable-podbb1c025f833866bd82832c3ffed1b68e.slice - libcontainer container kubepods-burstable-podbb1c025f833866bd82832c3ffed1b68e.slice. Jan 29 12:04:27.733920 kubelet[2815]: E0129 12:04:27.733705 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:27.735208 kubelet[2815]: I0129 12:04:27.735179 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f737d8717774f2ff25aa7df574f10b8-ca-certs\") pod \"kube-apiserver-ip-172-31-23-23\" (UID: \"6f737d8717774f2ff25aa7df574f10b8\") " pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:27.735346 kubelet[2815]: I0129 12:04:27.735222 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:27.735346 kubelet[2815]: I0129 12:04:27.735255 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:27.735346 kubelet[2815]: I0129 12:04:27.735276 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:27.735346 kubelet[2815]: I0129 12:04:27.735300 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:27.735346 kubelet[2815]: I0129 12:04:27.735324 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dba68a773473aa76831e6089b063cd8f-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-23\" (UID: \"dba68a773473aa76831e6089b063cd8f\") " pod="kube-system/kube-scheduler-ip-172-31-23-23" Jan 29 12:04:27.735598 kubelet[2815]: I0129 12:04:27.735344 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f737d8717774f2ff25aa7df574f10b8-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-23\" (UID: \"6f737d8717774f2ff25aa7df574f10b8\") " pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:27.735598 kubelet[2815]: I0129 
12:04:27.735367 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f737d8717774f2ff25aa7df574f10b8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-23\" (UID: \"6f737d8717774f2ff25aa7df574f10b8\") " pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:27.735598 kubelet[2815]: I0129 12:04:27.735392 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:27.736388 systemd[1]: Created slice kubepods-burstable-poddba68a773473aa76831e6089b063cd8f.slice - libcontainer container kubepods-burstable-poddba68a773473aa76831e6089b063cd8f.slice. Jan 29 12:04:27.740998 kubelet[2815]: E0129 12:04:27.740949 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-23?timeout=10s\": dial tcp 172.31.23.23:6443: connect: connection refused" interval="400ms" Jan 29 12:04:27.742506 kubelet[2815]: E0129 12:04:27.742459 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:27.748308 kubelet[2815]: I0129 12:04:27.748270 2815 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-23" Jan 29 12:04:27.748695 kubelet[2815]: E0129 12:04:27.748657 2815 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.23.23:6443/api/v1/nodes\": dial tcp 172.31.23.23:6443: connect: connection refused" node="ip-172-31-23-23" Jan 29 12:04:27.951289 kubelet[2815]: I0129 12:04:27.950937 2815 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-23" Jan 29 12:04:27.951446 kubelet[2815]: E0129 12:04:27.951296 2815 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.23.23:6443/api/v1/nodes\": dial tcp 172.31.23.23:6443: connect: connection refused" node="ip-172-31-23-23" Jan 29 12:04:28.028943 containerd[1982]: time="2025-01-29T12:04:28.028893250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-23,Uid:6f737d8717774f2ff25aa7df574f10b8,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:28.043852 containerd[1982]: time="2025-01-29T12:04:28.043575886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-23,Uid:dba68a773473aa76831e6089b063cd8f,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:28.044036 containerd[1982]: time="2025-01-29T12:04:28.043576083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-23,Uid:bb1c025f833866bd82832c3ffed1b68e,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:28.141566 kubelet[2815]: E0129 12:04:28.141520 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-23?timeout=10s\": dial tcp 172.31.23.23:6443: connect: connection refused" interval="800ms" Jan 29 12:04:28.353875 kubelet[2815]: I0129 12:04:28.353823 2815 kubelet_node_status.go:76] "Attempting 
to register node" node="ip-172-31-23-23" Jan 29 12:04:28.354375 kubelet[2815]: E0129 12:04:28.354339 2815 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.23.23:6443/api/v1/nodes\": dial tcp 172.31.23.23:6443: connect: connection refused" node="ip-172-31-23-23" Jan 29 12:04:28.519025 kubelet[2815]: W0129 12:04:28.518899 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.23:6443: connect: connection refused Jan 29 12:04:28.519025 kubelet[2815]: E0129 12:04:28.519020 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:28.528973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058346713.mount: Deactivated successfully. Jan 29 12:04:28.542796 containerd[1982]: time="2025-01-29T12:04:28.542730263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:28.545706 containerd[1982]: time="2025-01-29T12:04:28.545649724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:04:28.545850 containerd[1982]: time="2025-01-29T12:04:28.545768576Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:28.547006 containerd[1982]: time="2025-01-29T12:04:28.546967143Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:28.548396 containerd[1982]: time="2025-01-29T12:04:28.548352781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:04:28.550384 containerd[1982]: time="2025-01-29T12:04:28.550338624Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:28.551973 containerd[1982]: time="2025-01-29T12:04:28.551899205Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 12:04:28.555504 containerd[1982]: time="2025-01-29T12:04:28.554092828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:28.557233 containerd[1982]: time="2025-01-29T12:04:28.557190969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.117871ms" Jan 29 12:04:28.563749 
containerd[1982]: time="2025-01-29T12:04:28.562854037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.179494ms" Jan 29 12:04:28.564935 containerd[1982]: time="2025-01-29T12:04:28.564886859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.899051ms" Jan 29 12:04:28.601096 kubelet[2815]: W0129 12:04:28.600920 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.23:6443: connect: connection refused Jan 29 12:04:28.601327 kubelet[2815]: E0129 12:04:28.601110 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:28.765459 kubelet[2815]: W0129 12:04:28.758397 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-23&limit=500&resourceVersion=0": dial tcp 172.31.23.23:6443: connect: connection refused Jan 29 12:04:28.765459 kubelet[2815]: E0129 12:04:28.762663 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-23&limit=500&resourceVersion=0\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:28.876516 kubelet[2815]: W0129 12:04:28.875089 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.23:6443: connect: connection refused Jan 29 12:04:28.876516 kubelet[2815]: E0129 12:04:28.875179 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:28.944024 kubelet[2815]: E0129 12:04:28.943698 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-23?timeout=10s\": dial tcp 172.31.23.23:6443: connect: connection refused" interval="1.6s" Jan 29 12:04:28.946467 containerd[1982]: time="2025-01-29T12:04:28.946201851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:28.951250 containerd[1982]: time="2025-01-29T12:04:28.951107309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:28.951250 containerd[1982]: time="2025-01-29T12:04:28.951164215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:28.952125 containerd[1982]: time="2025-01-29T12:04:28.951313824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:28.986119 containerd[1982]: time="2025-01-29T12:04:28.980961966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:28.986119 containerd[1982]: time="2025-01-29T12:04:28.981533054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:28.986119 containerd[1982]: time="2025-01-29T12:04:28.981558840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:28.986119 containerd[1982]: time="2025-01-29T12:04:28.982102992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:29.011673 containerd[1982]: time="2025-01-29T12:04:29.010575698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:29.011673 containerd[1982]: time="2025-01-29T12:04:29.010719390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:29.011673 containerd[1982]: time="2025-01-29T12:04:29.010759767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:29.011673 containerd[1982]: time="2025-01-29T12:04:29.010861598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:29.040779 systemd[1]: Started cri-containerd-0651672b0a2e58c9ecdce457b8bcd0c8aacb090fca3bddfe4205dd5866440bfa.scope - libcontainer container 0651672b0a2e58c9ecdce457b8bcd0c8aacb090fca3bddfe4205dd5866440bfa. Jan 29 12:04:29.057787 systemd[1]: Started cri-containerd-8cc910fb60441787691c370b09cc6eeba138cb5c9ff768beb705c624816c0cbf.scope - libcontainer container 8cc910fb60441787691c370b09cc6eeba138cb5c9ff768beb705c624816c0cbf. Jan 29 12:04:29.083961 systemd[1]: Started cri-containerd-d7d9b6b85871615f5992776c1d89087c7947797c7d255c3755ce9a40432d30ef.scope - libcontainer container d7d9b6b85871615f5992776c1d89087c7947797c7d255c3755ce9a40432d30ef. 
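By this point the lease controller's "will retry" interval has doubled from 200ms through 400ms and 800ms to 1.6s, classic exponential backoff while the apiserver at 172.31.23.23:6443 stays unreachable. A generic capped-doubling sketch (the cap value here is illustrative, not kubelet's actual limit):

    import itertools

    def backoff_intervals(initial=0.2, factor=2.0, cap=10.0):
        """Yield retry delays in seconds, doubling up to a cap."""
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    print(list(itertools.islice(backoff_intervals(), 4)))
    # [0.2, 0.4, 0.8, 1.6] -- the intervals logged above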
Jan 29 12:04:29.157789 kubelet[2815]: I0129 12:04:29.157197 2815 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-23" Jan 29 12:04:29.158421 kubelet[2815]: E0129 12:04:29.158378 2815 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.23.23:6443/api/v1/nodes\": dial tcp 172.31.23.23:6443: connect: connection refused" node="ip-172-31-23-23" Jan 29 12:04:29.182114 containerd[1982]: time="2025-01-29T12:04:29.181875073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-23,Uid:bb1c025f833866bd82832c3ffed1b68e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0651672b0a2e58c9ecdce457b8bcd0c8aacb090fca3bddfe4205dd5866440bfa\"" Jan 29 12:04:29.189763 containerd[1982]: time="2025-01-29T12:04:29.189718108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-23,Uid:6f737d8717774f2ff25aa7df574f10b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cc910fb60441787691c370b09cc6eeba138cb5c9ff768beb705c624816c0cbf\"" Jan 29 12:04:29.199678 containerd[1982]: time="2025-01-29T12:04:29.199514854Z" level=info msg="CreateContainer within sandbox \"8cc910fb60441787691c370b09cc6eeba138cb5c9ff768beb705c624816c0cbf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:04:29.202014 containerd[1982]: time="2025-01-29T12:04:29.201855754Z" level=info msg="CreateContainer within sandbox \"0651672b0a2e58c9ecdce457b8bcd0c8aacb090fca3bddfe4205dd5866440bfa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:04:29.228432 containerd[1982]: time="2025-01-29T12:04:29.227473211Z" level=info msg="CreateContainer within sandbox \"8cc910fb60441787691c370b09cc6eeba138cb5c9ff768beb705c624816c0cbf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb367fc1e31b3bc7736126d92d4a7c6c5c874cd36ed474e020cabd353eb380f2\"" Jan 29 12:04:29.229321 containerd[1982]: time="2025-01-29T12:04:29.229284471Z" level=info msg="StartContainer for \"fb367fc1e31b3bc7736126d92d4a7c6c5c874cd36ed474e020cabd353eb380f2\"" Jan 29 12:04:29.246355 containerd[1982]: time="2025-01-29T12:04:29.246240823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-23,Uid:dba68a773473aa76831e6089b063cd8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7d9b6b85871615f5992776c1d89087c7947797c7d255c3755ce9a40432d30ef\"" Jan 29 12:04:29.246822 containerd[1982]: time="2025-01-29T12:04:29.246786027Z" level=info msg="CreateContainer within sandbox \"0651672b0a2e58c9ecdce457b8bcd0c8aacb090fca3bddfe4205dd5866440bfa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de\"" Jan 29 12:04:29.250352 containerd[1982]: time="2025-01-29T12:04:29.250305705Z" level=info msg="StartContainer for \"234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de\"" Jan 29 12:04:29.251288 containerd[1982]: time="2025-01-29T12:04:29.251250283Z" level=info msg="CreateContainer within sandbox \"d7d9b6b85871615f5992776c1d89087c7947797c7d255c3755ce9a40432d30ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:04:29.282413 systemd[1]: Started cri-containerd-fb367fc1e31b3bc7736126d92d4a7c6c5c874cd36ed474e020cabd353eb380f2.scope - libcontainer container fb367fc1e31b3bc7736126d92d4a7c6c5c874cd36ed474e020cabd353eb380f2. 
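Each static pod above follows the standard CRI lifecycle: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer launches it. The same three calls can be driven by hand with crictl; a sketch, assuming crictl is on PATH and pod.json/container.json hold the CRI sandbox and container configs (contents not shown):

    import subprocess

    def crictl(*args: str) -> str:
        """Run a crictl subcommand and return its stdout."""
        res = subprocess.run(["crictl", *args], check=True,
                             capture_output=True, text=True)
        return res.stdout.strip()

    sandbox_id = crictl("runp", "pod.json")                    # RunPodSandbox
    ctr_id = crictl("create", sandbox_id,
                    "container.json", "pod.json")              # CreateContainer
    crictl("start", ctr_id)                                    # StartContainer
    print("started", ctr_id, "in sandbox", sandbox_id)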
Jan 29 12:04:29.288427 containerd[1982]: time="2025-01-29T12:04:29.288344985Z" level=info msg="CreateContainer within sandbox \"d7d9b6b85871615f5992776c1d89087c7947797c7d255c3755ce9a40432d30ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b\"" Jan 29 12:04:29.291770 containerd[1982]: time="2025-01-29T12:04:29.291650652Z" level=info msg="StartContainer for \"381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b\"" Jan 29 12:04:29.326957 systemd[1]: Started cri-containerd-234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de.scope - libcontainer container 234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de. Jan 29 12:04:29.363831 systemd[1]: Started cri-containerd-381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b.scope - libcontainer container 381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b. Jan 29 12:04:29.419958 containerd[1982]: time="2025-01-29T12:04:29.419814340Z" level=info msg="StartContainer for \"fb367fc1e31b3bc7736126d92d4a7c6c5c874cd36ed474e020cabd353eb380f2\" returns successfully" Jan 29 12:04:29.442221 containerd[1982]: time="2025-01-29T12:04:29.442172079Z" level=info msg="StartContainer for \"234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de\" returns successfully" Jan 29 12:04:29.475520 containerd[1982]: time="2025-01-29T12:04:29.475352640Z" level=info msg="StartContainer for \"381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b\" returns successfully" Jan 29 12:04:29.484586 kubelet[2815]: E0129 12:04:29.484527 2815 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:29.639433 kubelet[2815]: E0129 12:04:29.639246 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:29.641816 kubelet[2815]: E0129 12:04:29.641520 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:29.646595 kubelet[2815]: E0129 12:04:29.645838 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:30.649964 kubelet[2815]: E0129 12:04:30.649694 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:30.650926 kubelet[2815]: E0129 12:04:30.649913 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:30.762372 kubelet[2815]: I0129 12:04:30.762033 2815 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-23" Jan 29 12:04:31.652659 kubelet[2815]: E0129 12:04:31.652556 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:32.213582 
kubelet[2815]: E0129 12:04:32.213402 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:32.338518 kubelet[2815]: E0129 12:04:32.338473 2815 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:32.368537 update_engine[1965]: I20250129 12:04:32.367781 1965 update_attempter.cc:509] Updating boot flags... Jan 29 12:04:32.517611 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3101) Jan 29 12:04:32.918577 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3102) Jan 29 12:04:33.312596 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3102) Jan 29 12:04:33.778478 kubelet[2815]: E0129 12:04:33.778422 2815 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-23\" not found" node="ip-172-31-23-23" Jan 29 12:04:33.884474 kubelet[2815]: I0129 12:04:33.884420 2815 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-23-23" Jan 29 12:04:33.939995 kubelet[2815]: I0129 12:04:33.939960 2815 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:33.952917 kubelet[2815]: E0129 12:04:33.952877 2815 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-23\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:33.953092 kubelet[2815]: I0129 12:04:33.952931 2815 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-23" Jan 29 12:04:33.956713 kubelet[2815]: E0129 12:04:33.956676 2815 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-23\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-23" Jan 29 12:04:33.956713 kubelet[2815]: I0129 12:04:33.956713 2815 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:33.958870 kubelet[2815]: E0129 12:04:33.958838 2815 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-23\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:34.490419 kubelet[2815]: I0129 12:04:34.490377 2815 apiserver.go:52] "Watching apiserver" Jan 29 12:04:34.534338 kubelet[2815]: I0129 12:04:34.534298 2815 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:04:36.120144 systemd[1]: Reloading requested from client PID 3355 ('systemctl') (unit session-7.scope)... Jan 29 12:04:36.120164 systemd[1]: Reloading... Jan 29 12:04:36.264530 zram_generator::config[3395]: No configuration found. Jan 29 12:04:36.463858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:04:36.656192 systemd[1]: Reloading finished in 535 ms. 
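Both daemon reloads emit the same warning: line 6 of docker.socket points ListenStream= at the legacy /var/run directory, and systemd rewrites it to /run/docker.sock on the fly while asking for the unit file to be corrected. A hypothetical helper that applies the requested edit to a writable copy of the unit (on Flatcar /usr is read-only, so the fixed copy would live under /etc):

    import re
    from pathlib import Path

    def fix_legacy_listen_paths(unit_file: str) -> int:
        """Rewrite ListenStream=/var/run/... to /run/...; return lines changed."""
        path = Path(unit_file)
        text, n = re.subn(r"^(ListenStream=)/var/run/", r"\g<1>/run/",
                          path.read_text(), flags=re.MULTILINE)
        if n:
            path.write_text(text)
        return n

    # fix_legacy_listen_paths("/etc/systemd/system/docker.socket")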
Jan 29 12:04:36.741906 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:36.782752 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:04:36.783015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:36.783083 systemd[1]: kubelet.service: Consumed 1.246s CPU time, 124.1M memory peak, 0B memory swap peak. Jan 29 12:04:36.799974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:37.120063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:37.133967 (kubelet)[3452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:04:37.263990 kubelet[3452]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:37.264830 kubelet[3452]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 12:04:37.264830 kubelet[3452]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:37.264830 kubelet[3452]: I0129 12:04:37.264787 3452 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:04:37.281509 kubelet[3452]: I0129 12:04:37.280369 3452 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 12:04:37.281509 kubelet[3452]: I0129 12:04:37.280405 3452 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:04:37.281509 kubelet[3452]: I0129 12:04:37.280861 3452 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 12:04:37.283267 kubelet[3452]: I0129 12:04:37.282831 3452 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:04:37.298224 kubelet[3452]: I0129 12:04:37.297864 3452 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:04:37.308443 kubelet[3452]: E0129 12:04:37.307193 3452 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:04:37.308443 kubelet[3452]: I0129 12:04:37.307263 3452 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:04:37.310866 kubelet[3452]: I0129 12:04:37.310830 3452 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:04:37.311127 kubelet[3452]: I0129 12:04:37.311086 3452 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:04:37.311463 kubelet[3452]: I0129 12:04:37.311125 3452 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-23","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:04:37.311617 kubelet[3452]: I0129 12:04:37.311469 3452 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:04:37.315517 kubelet[3452]: I0129 12:04:37.314612 3452 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 12:04:37.315517 kubelet[3452]: I0129 12:04:37.314706 3452 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:37.315517 kubelet[3452]: I0129 12:04:37.314953 3452 kubelet.go:446] "Attempting to sync node with API server" Jan 29 12:04:37.315517 kubelet[3452]: I0129 12:04:37.314977 3452 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:04:37.315517 kubelet[3452]: I0129 12:04:37.315003 3452 kubelet.go:352] "Adding apiserver pod source" Jan 29 12:04:37.315517 kubelet[3452]: I0129 12:04:37.315023 3452 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:04:37.322261 kubelet[3452]: I0129 12:04:37.322233 3452 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:04:37.332355 kubelet[3452]: I0129 12:04:37.331154 3452 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:04:37.343838 kubelet[3452]: I0129 12:04:37.343761 3452 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 12:04:37.344007 kubelet[3452]: I0129 12:04:37.343997 3452 server.go:1287] "Started kubelet" Jan 29 12:04:37.351356 kubelet[3452]: I0129 12:04:37.351299 3452 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:04:37.371587 kubelet[3452]: I0129 12:04:37.371469 3452 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jan 29 12:04:37.377527 kubelet[3452]: I0129 12:04:37.373992 3452 server.go:490] "Adding debug handlers to kubelet server" Jan 29 12:04:37.377527 kubelet[3452]: I0129 12:04:37.375235 3452 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:04:37.377527 kubelet[3452]: I0129 12:04:37.375470 3452 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:04:37.380365 kubelet[3452]: I0129 12:04:37.380201 3452 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:04:37.392283 kubelet[3452]: I0129 12:04:37.391894 3452 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 12:04:37.394163 kubelet[3452]: I0129 12:04:37.394134 3452 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:04:37.394463 kubelet[3452]: I0129 12:04:37.394373 3452 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:04:37.414058 kubelet[3452]: E0129 12:04:37.414023 3452 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:04:37.414891 kubelet[3452]: I0129 12:04:37.414832 3452 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:04:37.414891 kubelet[3452]: I0129 12:04:37.414852 3452 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:04:37.415531 kubelet[3452]: I0129 12:04:37.414996 3452 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:04:37.432681 kubelet[3452]: I0129 12:04:37.432635 3452 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:04:37.446375 kubelet[3452]: I0129 12:04:37.446172 3452 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:04:37.446375 kubelet[3452]: I0129 12:04:37.446384 3452 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 12:04:37.446666 kubelet[3452]: I0129 12:04:37.446560 3452 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 12:04:37.446666 kubelet[3452]: I0129 12:04:37.446572 3452 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 12:04:37.452935 kubelet[3452]: E0129 12:04:37.451457 3452 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:04:37.552072 kubelet[3452]: E0129 12:04:37.552021 3452 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:04:37.558043 kubelet[3452]: I0129 12:04:37.558019 3452 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 12:04:37.558508 kubelet[3452]: I0129 12:04:37.558312 3452 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 12:04:37.558508 kubelet[3452]: I0129 12:04:37.558357 3452 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:37.559029 kubelet[3452]: I0129 12:04:37.558721 3452 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:04:37.559029 kubelet[3452]: I0129 12:04:37.558740 3452 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:04:37.559029 kubelet[3452]: I0129 12:04:37.558767 3452 policy_none.go:49] "None policy: Start" Jan 29 12:04:37.559029 kubelet[3452]: I0129 12:04:37.558780 3452 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 12:04:37.559029 kubelet[3452]: I0129 12:04:37.558794 3452 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:04:37.559029 kubelet[3452]: I0129 12:04:37.558950 3452 state_mem.go:75] "Updated machine memory state" Jan 29 12:04:37.565313 kubelet[3452]: I0129 12:04:37.565288 3452 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:04:37.566835 kubelet[3452]: I0129 12:04:37.565920 3452 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:04:37.566835 kubelet[3452]: I0129 12:04:37.565940 3452 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:04:37.566835 kubelet[3452]: I0129 12:04:37.566375 3452 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:04:37.575139 kubelet[3452]: E0129 12:04:37.575111 3452 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 12:04:37.687804 kubelet[3452]: I0129 12:04:37.686262 3452 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-23" Jan 29 12:04:37.702681 kubelet[3452]: I0129 12:04:37.702614 3452 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-23-23" Jan 29 12:04:37.702826 kubelet[3452]: I0129 12:04:37.702749 3452 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-23-23" Jan 29 12:04:37.758512 kubelet[3452]: I0129 12:04:37.758381 3452 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-23" Jan 29 12:04:37.759711 kubelet[3452]: I0129 12:04:37.759504 3452 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:37.759897 kubelet[3452]: I0129 12:04:37.759614 3452 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:37.797857 kubelet[3452]: I0129 12:04:37.797391 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dba68a773473aa76831e6089b063cd8f-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-23\" (UID: \"dba68a773473aa76831e6089b063cd8f\") " pod="kube-system/kube-scheduler-ip-172-31-23-23" Jan 29 12:04:37.797857 kubelet[3452]: I0129 12:04:37.797473 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f737d8717774f2ff25aa7df574f10b8-ca-certs\") pod \"kube-apiserver-ip-172-31-23-23\" (UID: \"6f737d8717774f2ff25aa7df574f10b8\") " pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:37.797857 kubelet[3452]: I0129 12:04:37.797523 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:37.797857 kubelet[3452]: I0129 12:04:37.797551 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:37.797857 kubelet[3452]: I0129 12:04:37.797584 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f737d8717774f2ff25aa7df574f10b8-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-23\" (UID: \"6f737d8717774f2ff25aa7df574f10b8\") " pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:37.798211 kubelet[3452]: I0129 12:04:37.797611 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f737d8717774f2ff25aa7df574f10b8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-23\" (UID: \"6f737d8717774f2ff25aa7df574f10b8\") " pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:37.798211 kubelet[3452]: I0129 12:04:37.797637 3452 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:37.798211 kubelet[3452]: I0129 12:04:37.797662 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:37.798211 kubelet[3452]: I0129 12:04:37.797689 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb1c025f833866bd82832c3ffed1b68e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-23\" (UID: \"bb1c025f833866bd82832c3ffed1b68e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-23" Jan 29 12:04:38.336965 kubelet[3452]: I0129 12:04:38.336915 3452 apiserver.go:52] "Watching apiserver" Jan 29 12:04:38.394723 kubelet[3452]: I0129 12:04:38.394648 3452 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:04:38.505933 kubelet[3452]: I0129 12:04:38.505902 3452 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:38.521644 kubelet[3452]: E0129 12:04:38.521606 3452 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-23\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-23" Jan 29 12:04:38.632824 kubelet[3452]: I0129 12:04:38.632654 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-23" podStartSLOduration=1.632633749 podStartE2EDuration="1.632633749s" podCreationTimestamp="2025-01-29 12:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:38.563801743 +0000 UTC m=+1.420397764" watchObservedRunningTime="2025-01-29 12:04:38.632633749 +0000 UTC m=+1.489229783" Jan 29 12:04:38.685476 kubelet[3452]: I0129 12:04:38.685292 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-23" podStartSLOduration=1.6852696539999998 podStartE2EDuration="1.685269654s" podCreationTimestamp="2025-01-29 12:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:38.634748801 +0000 UTC m=+1.491344820" watchObservedRunningTime="2025-01-29 12:04:38.685269654 +0000 UTC m=+1.541865674" Jan 29 12:04:38.732774 kubelet[3452]: I0129 12:04:38.732460 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-23" podStartSLOduration=1.732441087 podStartE2EDuration="1.732441087s" podCreationTimestamp="2025-01-29 12:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:38.688920898 +0000 UTC m=+1.545516918" watchObservedRunningTime="2025-01-29 12:04:38.732441087 +0000 UTC m=+1.589037106" Jan 29 12:04:42.091955 
kubelet[3452]: I0129 12:04:42.091912 3452 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:04:42.094632 kubelet[3452]: I0129 12:04:42.094304 3452 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:04:42.094743 containerd[1982]: time="2025-01-29T12:04:42.093817234Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:04:42.658422 systemd[1]: Created slice kubepods-besteffort-podd2fa6463_e42f_4a3d_8da6_d72872f689fd.slice - libcontainer container kubepods-besteffort-podd2fa6463_e42f_4a3d_8da6_d72872f689fd.slice. Jan 29 12:04:42.737554 kubelet[3452]: I0129 12:04:42.737413 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2fa6463-e42f-4a3d-8da6-d72872f689fd-kube-proxy\") pod \"kube-proxy-5v4hv\" (UID: \"d2fa6463-e42f-4a3d-8da6-d72872f689fd\") " pod="kube-system/kube-proxy-5v4hv" Jan 29 12:04:42.737929 kubelet[3452]: I0129 12:04:42.737563 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2fa6463-e42f-4a3d-8da6-d72872f689fd-xtables-lock\") pod \"kube-proxy-5v4hv\" (UID: \"d2fa6463-e42f-4a3d-8da6-d72872f689fd\") " pod="kube-system/kube-proxy-5v4hv" Jan 29 12:04:42.737929 kubelet[3452]: I0129 12:04:42.737593 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2fa6463-e42f-4a3d-8da6-d72872f689fd-lib-modules\") pod \"kube-proxy-5v4hv\" (UID: \"d2fa6463-e42f-4a3d-8da6-d72872f689fd\") " pod="kube-system/kube-proxy-5v4hv" Jan 29 12:04:42.737929 kubelet[3452]: I0129 12:04:42.737772 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jk9\" (UniqueName: \"kubernetes.io/projected/d2fa6463-e42f-4a3d-8da6-d72872f689fd-kube-api-access-s4jk9\") pod \"kube-proxy-5v4hv\" (UID: \"d2fa6463-e42f-4a3d-8da6-d72872f689fd\") " pod="kube-system/kube-proxy-5v4hv" Jan 29 12:04:42.847457 kubelet[3452]: E0129 12:04:42.847420 3452 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 12:04:42.847457 kubelet[3452]: E0129 12:04:42.847460 3452 projected.go:194] Error preparing data for projected volume kube-api-access-s4jk9 for pod kube-system/kube-proxy-5v4hv: configmap "kube-root-ca.crt" not found Jan 29 12:04:42.847719 kubelet[3452]: E0129 12:04:42.847549 3452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d2fa6463-e42f-4a3d-8da6-d72872f689fd-kube-api-access-s4jk9 podName:d2fa6463-e42f-4a3d-8da6-d72872f689fd nodeName:}" failed. No retries permitted until 2025-01-29 12:04:43.347523082 +0000 UTC m=+6.204119094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s4jk9" (UniqueName: "kubernetes.io/projected/d2fa6463-e42f-4a3d-8da6-d72872f689fd-kube-api-access-s4jk9") pod "kube-proxy-5v4hv" (UID: "d2fa6463-e42f-4a3d-8da6-d72872f689fd") : configmap "kube-root-ca.crt" not found Jan 29 12:04:43.205826 systemd[1]: Created slice kubepods-besteffort-pod32de2295_f952_4971_ba67_8bd9040a3fbf.slice - libcontainer container kubepods-besteffort-pod32de2295_f952_4971_ba67_8bd9040a3fbf.slice. 
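Note: the pair of entries at 12:04:42 ("Updating runtime config through cri with podcidr", "Updating Pod CIDR") is the kubelet pushing the node's newly assigned pod CIDR down to containerd over the CRI. A sketch of that call using the published cri-api types, assuming containerd's default socket path; this is a standalone illustration, not kubelet code:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the same CRI socket the kubelet talks to.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // UpdateRuntimeConfig carries only the network config (the pod CIDR).
        client := runtimeapi.NewRuntimeServiceClient(conn)
        _, err = client.UpdateRuntimeConfig(context.Background(),
            &runtimeapi.UpdateRuntimeConfigRequest{
                RuntimeConfig: &runtimeapi.RuntimeConfig{
                    NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
                },
            })
        if err != nil {
            log.Fatal(err)
        }
    }

containerd's response is the "No cni config template is specified" line: it accepts the CIDR but leaves CNI configuration to another component (here, the Calico install that follows).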
Jan 29 12:04:43.242510 kubelet[3452]: I0129 12:04:43.242377 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32de2295-f952-4971-ba67-8bd9040a3fbf-var-lib-calico\") pod \"tigera-operator-7d68577dc5-l7z77\" (UID: \"32de2295-f952-4971-ba67-8bd9040a3fbf\") " pod="tigera-operator/tigera-operator-7d68577dc5-l7z77" Jan 29 12:04:43.244351 kubelet[3452]: I0129 12:04:43.242563 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhl9r\" (UniqueName: \"kubernetes.io/projected/32de2295-f952-4971-ba67-8bd9040a3fbf-kube-api-access-xhl9r\") pod \"tigera-operator-7d68577dc5-l7z77\" (UID: \"32de2295-f952-4971-ba67-8bd9040a3fbf\") " pod="tigera-operator/tigera-operator-7d68577dc5-l7z77" Jan 29 12:04:43.510351 containerd[1982]: time="2025-01-29T12:04:43.510308566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-l7z77,Uid:32de2295-f952-4971-ba67-8bd9040a3fbf,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:04:43.562014 containerd[1982]: time="2025-01-29T12:04:43.561840533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:43.562014 containerd[1982]: time="2025-01-29T12:04:43.561917113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:43.562014 containerd[1982]: time="2025-01-29T12:04:43.561939260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.562572 containerd[1982]: time="2025-01-29T12:04:43.562091911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.576677 containerd[1982]: time="2025-01-29T12:04:43.576042748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5v4hv,Uid:d2fa6463-e42f-4a3d-8da6-d72872f689fd,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:43.613138 systemd[1]: Started cri-containerd-d9f41ee76f06550e4716c68fe8eb5214da2b87bf0353da663e606603623e3570.scope - libcontainer container d9f41ee76f06550e4716c68fe8eb5214da2b87bf0353da663e606603623e3570. Jan 29 12:04:43.673960 containerd[1982]: time="2025-01-29T12:04:43.672771673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:43.673960 containerd[1982]: time="2025-01-29T12:04:43.673518792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:43.673960 containerd[1982]: time="2025-01-29T12:04:43.673540880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.678351 containerd[1982]: time="2025-01-29T12:04:43.676312150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.704075 containerd[1982]: time="2025-01-29T12:04:43.703812716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-l7z77,Uid:32de2295-f952-4971-ba67-8bd9040a3fbf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d9f41ee76f06550e4716c68fe8eb5214da2b87bf0353da663e606603623e3570\"" Jan 29 12:04:43.709257 containerd[1982]: time="2025-01-29T12:04:43.709105422Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 12:04:43.733736 systemd[1]: Started cri-containerd-894786f990d502370be36d860347ab196f19c332e51e7f2efa788c52e913d8db.scope - libcontainer container 894786f990d502370be36d860347ab196f19c332e51e7f2efa788c52e913d8db. Jan 29 12:04:43.779420 containerd[1982]: time="2025-01-29T12:04:43.777026354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5v4hv,Uid:d2fa6463-e42f-4a3d-8da6-d72872f689fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"894786f990d502370be36d860347ab196f19c332e51e7f2efa788c52e913d8db\"" Jan 29 12:04:43.787747 containerd[1982]: time="2025-01-29T12:04:43.787600017Z" level=info msg="CreateContainer within sandbox \"894786f990d502370be36d860347ab196f19c332e51e7f2efa788c52e913d8db\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:04:43.816433 containerd[1982]: time="2025-01-29T12:04:43.815855698Z" level=info msg="CreateContainer within sandbox \"894786f990d502370be36d860347ab196f19c332e51e7f2efa788c52e913d8db\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd61d4e029d2f0aea63810a9c2d69fd54a92559e24a32085621f30589510278e\"" Jan 29 12:04:43.818811 containerd[1982]: time="2025-01-29T12:04:43.816904143Z" level=info msg="StartContainer for \"bd61d4e029d2f0aea63810a9c2d69fd54a92559e24a32085621f30589510278e\"" Jan 29 12:04:43.874595 systemd[1]: Started cri-containerd-bd61d4e029d2f0aea63810a9c2d69fd54a92559e24a32085621f30589510278e.scope - libcontainer container bd61d4e029d2f0aea63810a9c2d69fd54a92559e24a32085621f30589510278e. Jan 29 12:04:43.956526 sudo[2303]: pam_unix(sudo:session): session closed for user root Jan 29 12:04:43.960638 containerd[1982]: time="2025-01-29T12:04:43.959952168Z" level=info msg="StartContainer for \"bd61d4e029d2f0aea63810a9c2d69fd54a92559e24a32085621f30589510278e\" returns successfully" Jan 29 12:04:43.980806 sshd[2300]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:43.986252 systemd-logind[1964]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:04:43.987205 systemd[1]: sshd@6-172.31.23.23:22-139.178.68.195:46702.service: Deactivated successfully. Jan 29 12:04:43.989954 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:04:43.990174 systemd[1]: session-7.scope: Consumed 5.123s CPU time, 140.8M memory peak, 0B memory swap peak. Jan 29 12:04:43.991558 systemd-logind[1964]: Removed session 7. Jan 29 12:04:45.322764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078073899.mount: Deactivated successfully. 
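Note: the PullImage entry above is containerd's CRI plugin fetching the tigera-operator image, while kube-proxy, whose image is already in the store, starts immediately. Roughly the same pull through containerd's Go client; a sketch assuming the default socket path, with error handling trimmed to the essentials:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.2",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }

The pull completes at 12:04:47 below, which is also why the tigera-operator container starts several seconds after its sandbox.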
Jan 29 12:04:46.635782 kubelet[3452]: I0129 12:04:46.635532 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5v4hv" podStartSLOduration=4.6352620909999995 podStartE2EDuration="4.635262091s" podCreationTimestamp="2025-01-29 12:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:44.554544831 +0000 UTC m=+7.411140851" watchObservedRunningTime="2025-01-29 12:04:46.635262091 +0000 UTC m=+9.491858140" Jan 29 12:04:47.428793 containerd[1982]: time="2025-01-29T12:04:47.428738400Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:47.429907 containerd[1982]: time="2025-01-29T12:04:47.429759050Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 12:04:47.432297 containerd[1982]: time="2025-01-29T12:04:47.431001257Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:47.434177 containerd[1982]: time="2025-01-29T12:04:47.433295835Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:47.434177 containerd[1982]: time="2025-01-29T12:04:47.434036540Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.724882083s" Jan 29 12:04:47.434177 containerd[1982]: time="2025-01-29T12:04:47.434073924Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 12:04:47.437645 containerd[1982]: time="2025-01-29T12:04:47.437608882Z" level=info msg="CreateContainer within sandbox \"d9f41ee76f06550e4716c68fe8eb5214da2b87bf0353da663e606603623e3570\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 12:04:47.456846 containerd[1982]: time="2025-01-29T12:04:47.456803352Z" level=info msg="CreateContainer within sandbox \"d9f41ee76f06550e4716c68fe8eb5214da2b87bf0353da663e606603623e3570\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126\"" Jan 29 12:04:47.457808 containerd[1982]: time="2025-01-29T12:04:47.457772097Z" level=info msg="StartContainer for \"baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126\"" Jan 29 12:04:47.514776 systemd[1]: Started cri-containerd-baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126.scope - libcontainer container baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126. 
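Note: every kubelet entry in this log carries klog's header: a severity letter fused with the date (I0129 = Info, Jan 29), wall-clock time, PID, source file:line, then a structured message with key=value pairs. A small sketch that splits the header out of one of the lines above (the regular expression is an assumption about the field shapes, adequate for these entries):

    package main

    import (
        "fmt"
        "regexp"
    )

    // severity+mmdd, time, pid, file:line, message
    var klogRe = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
        line := `I0129 12:04:37.343997 3452 server.go:1287] "Started kubelet"`
        m := klogRe.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }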
Jan 29 12:04:47.550256 containerd[1982]: time="2025-01-29T12:04:47.550133899Z" level=info msg="StartContainer for \"baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126\" returns successfully" Jan 29 12:04:48.561006 kubelet[3452]: I0129 12:04:48.560674 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-l7z77" podStartSLOduration=1.8319891940000002 podStartE2EDuration="5.56065387s" podCreationTimestamp="2025-01-29 12:04:43 +0000 UTC" firstStartedPulling="2025-01-29 12:04:43.706721492 +0000 UTC m=+6.563317501" lastFinishedPulling="2025-01-29 12:04:47.435386167 +0000 UTC m=+10.291982177" observedRunningTime="2025-01-29 12:04:48.560562072 +0000 UTC m=+11.417158092" watchObservedRunningTime="2025-01-29 12:04:48.56065387 +0000 UTC m=+11.417249888" Jan 29 12:04:51.250734 systemd[1]: Created slice kubepods-besteffort-podf9fc32be_99bf_4b49_b513_c2a84e7e40a9.slice - libcontainer container kubepods-besteffort-podf9fc32be_99bf_4b49_b513_c2a84e7e40a9.slice. Jan 29 12:04:51.300519 kubelet[3452]: I0129 12:04:51.300464 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc32be-99bf-4b49-b513-c2a84e7e40a9-tigera-ca-bundle\") pod \"calico-typha-669f6d6d4d-66hnk\" (UID: \"f9fc32be-99bf-4b49-b513-c2a84e7e40a9\") " pod="calico-system/calico-typha-669f6d6d4d-66hnk" Jan 29 12:04:51.301200 kubelet[3452]: I0129 12:04:51.300529 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9fc32be-99bf-4b49-b513-c2a84e7e40a9-typha-certs\") pod \"calico-typha-669f6d6d4d-66hnk\" (UID: \"f9fc32be-99bf-4b49-b513-c2a84e7e40a9\") " pod="calico-system/calico-typha-669f6d6d4d-66hnk" Jan 29 12:04:51.301200 kubelet[3452]: I0129 12:04:51.300559 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbncr\" (UniqueName: \"kubernetes.io/projected/f9fc32be-99bf-4b49-b513-c2a84e7e40a9-kube-api-access-dbncr\") pod \"calico-typha-669f6d6d4d-66hnk\" (UID: \"f9fc32be-99bf-4b49-b513-c2a84e7e40a9\") " pod="calico-system/calico-typha-669f6d6d4d-66hnk" Jan 29 12:04:51.465601 systemd[1]: Created slice kubepods-besteffort-poddd59069c_44fa_4f25_bcb7_0f3bd6115020.slice - libcontainer container kubepods-besteffort-poddd59069c_44fa_4f25_bcb7_0f3bd6115020.slice. 
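Note: the pod_startup_latency_tracker entries report two figures: podStartE2EDuration (observedRunningTime minus podCreationTimestamp) and podStartSLOduration, which additionally subtracts the image pull window (firstStartedPulling to lastFinishedPulling). That is why the two figures coincide for kube-proxy and the control-plane pods (no pull; their pull timestamps are the zero value 0001-01-01) but differ by about 3.73s for tigera-operator. Reproducing the arithmetic from the timestamps logged above (a sketch; the values are copied from the tigera-operator entry):

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse reads timestamps in the format the kubelet logs them.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-01-29 12:04:43 +0000 UTC")
        running := mustParse("2025-01-29 12:04:48.560562072 +0000 UTC")
        pullStart := mustParse("2025-01-29 12:04:43.706721492 +0000 UTC")
        pullEnd := mustParse("2025-01-29 12:04:47.435386167 +0000 UTC")

        e2e := running.Sub(created)         // ~5.56s
        slo := e2e - pullEnd.Sub(pullStart) // ~1.83s
        fmt.Println(e2e, slo)
    }

Both results agree with the logged podStartE2EDuration and podStartSLOduration to well under a millisecond.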
Jan 29 12:04:51.503634 kubelet[3452]: I0129 12:04:51.503505 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd59069c-44fa-4f25-bcb7-0f3bd6115020-tigera-ca-bundle\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.503634 kubelet[3452]: I0129 12:04:51.503562 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dd59069c-44fa-4f25-bcb7-0f3bd6115020-node-certs\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.503634 kubelet[3452]: I0129 12:04:51.503584 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dd59069c-44fa-4f25-bcb7-0f3bd6115020-var-run-calico\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.503634 kubelet[3452]: I0129 12:04:51.503609 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd59069c-44fa-4f25-bcb7-0f3bd6115020-var-lib-calico\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.503634 kubelet[3452]: I0129 12:04:51.503635 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd59069c-44fa-4f25-bcb7-0f3bd6115020-xtables-lock\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.504038 kubelet[3452]: I0129 12:04:51.503659 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dd59069c-44fa-4f25-bcb7-0f3bd6115020-cni-log-dir\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.504038 kubelet[3452]: I0129 12:04:51.503778 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd59069c-44fa-4f25-bcb7-0f3bd6115020-lib-modules\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.504038 kubelet[3452]: I0129 12:04:51.503807 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dd59069c-44fa-4f25-bcb7-0f3bd6115020-cni-net-dir\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.504038 kubelet[3452]: I0129 12:04:51.503833 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dd59069c-44fa-4f25-bcb7-0f3bd6115020-flexvol-driver-host\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.504038 kubelet[3452]: I0129 12:04:51.503858 3452 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dd59069c-44fa-4f25-bcb7-0f3bd6115020-policysync\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.504250 kubelet[3452]: I0129 12:04:51.503882 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dd59069c-44fa-4f25-bcb7-0f3bd6115020-cni-bin-dir\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.504250 kubelet[3452]: I0129 12:04:51.503908 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6gvx\" (UniqueName: \"kubernetes.io/projected/dd59069c-44fa-4f25-bcb7-0f3bd6115020-kube-api-access-n6gvx\") pod \"calico-node-xjbqz\" (UID: \"dd59069c-44fa-4f25-bcb7-0f3bd6115020\") " pod="calico-system/calico-node-xjbqz" Jan 29 12:04:51.556050 containerd[1982]: time="2025-01-29T12:04:51.555984620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-669f6d6d4d-66hnk,Uid:f9fc32be-99bf-4b49-b513-c2a84e7e40a9,Namespace:calico-system,Attempt:0,}" Jan 29 12:04:51.608899 kubelet[3452]: E0129 12:04:51.608701 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.608899 kubelet[3452]: W0129 12:04:51.608733 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.618020 kubelet[3452]: E0129 12:04:51.617978 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.618241 kubelet[3452]: W0129 12:04:51.618015 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.618241 kubelet[3452]: E0129 12:04:51.618125 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.622386 kubelet[3452]: E0129 12:04:51.622139 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.622386 kubelet[3452]: W0129 12:04:51.622171 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.622386 kubelet[3452]: E0129 12:04:51.622199 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:51.622763 kubelet[3452]: E0129 12:04:51.622743 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.623515 kubelet[3452]: W0129 12:04:51.622893 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.623515 kubelet[3452]: E0129 12:04:51.622922 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.629929 kubelet[3452]: E0129 12:04:51.628931 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.629929 kubelet[3452]: W0129 12:04:51.629072 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.629929 kubelet[3452]: E0129 12:04:51.629098 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.630859 kubelet[3452]: E0129 12:04:51.630829 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.630859 kubelet[3452]: W0129 12:04:51.630860 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.631032 kubelet[3452]: E0129 12:04:51.630883 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.631076 kubelet[3452]: E0129 12:04:51.631052 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.662610 kubelet[3452]: E0129 12:04:51.662558 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.662610 kubelet[3452]: W0129 12:04:51.662607 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.663101 kubelet[3452]: E0129 12:04:51.662635 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:51.663719 kubelet[3452]: E0129 12:04:51.663523 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.663719 kubelet[3452]: W0129 12:04:51.663549 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.663719 kubelet[3452]: E0129 12:04:51.663572 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.664631 kubelet[3452]: E0129 12:04:51.664012 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.664631 kubelet[3452]: W0129 12:04:51.664030 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.664631 kubelet[3452]: E0129 12:04:51.664048 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.671648 containerd[1982]: time="2025-01-29T12:04:51.670330506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:51.671648 containerd[1982]: time="2025-01-29T12:04:51.670406823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:51.671648 containerd[1982]: time="2025-01-29T12:04:51.670443353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:51.671648 containerd[1982]: time="2025-01-29T12:04:51.670620143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:51.730951 kubelet[3452]: E0129 12:04:51.728723 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.730951 kubelet[3452]: W0129 12:04:51.728757 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.730951 kubelet[3452]: E0129 12:04:51.728786 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.735727 systemd[1]: Started cri-containerd-2836f930dd0820ea176660daac3267ec5dcf84f3fe8b834d9a2c7ba063c0b7d9.scope - libcontainer container 2836f930dd0820ea176660daac3267ec5dcf84f3fe8b834d9a2c7ba063c0b7d9. 
Jan 29 12:04:51.753118 kubelet[3452]: E0129 12:04:51.753085 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.753249 kubelet[3452]: W0129 12:04:51.753114 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.753249 kubelet[3452]: E0129 12:04:51.753152 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.787092 containerd[1982]: time="2025-01-29T12:04:51.786809080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xjbqz,Uid:dd59069c-44fa-4f25-bcb7-0f3bd6115020,Namespace:calico-system,Attempt:0,}" Jan 29 12:04:51.834171 containerd[1982]: time="2025-01-29T12:04:51.833894856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:51.834171 containerd[1982]: time="2025-01-29T12:04:51.833986756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:51.837055 containerd[1982]: time="2025-01-29T12:04:51.834113365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:51.837055 containerd[1982]: time="2025-01-29T12:04:51.836712044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:51.870054 kubelet[3452]: E0129 12:04:51.868650 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed" Jan 29 12:04:51.893395 systemd[1]: Started cri-containerd-12e616a953c92a6d079d307c218701a4e0af350825305c7769094c79133309b4.scope - libcontainer container 12e616a953c92a6d079d307c218701a4e0af350825305c7769094c79133309b4. Jan 29 12:04:51.900774 containerd[1982]: time="2025-01-29T12:04:51.900718487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-669f6d6d4d-66hnk,Uid:f9fc32be-99bf-4b49-b513-c2a84e7e40a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"2836f930dd0820ea176660daac3267ec5dcf84f3fe8b834d9a2c7ba063c0b7d9\"" Jan 29 12:04:51.903755 containerd[1982]: time="2025-01-29T12:04:51.903689486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 12:04:51.919635 kubelet[3452]: E0129 12:04:51.919511 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.919635 kubelet[3452]: W0129 12:04:51.919543 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.919635 kubelet[3452]: E0129 12:04:51.919572 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:51.920585 kubelet[3452]: E0129 12:04:51.920398 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.920585 kubelet[3452]: W0129 12:04:51.920420 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.920585 kubelet[3452]: E0129 12:04:51.920441 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.920918 kubelet[3452]: E0129 12:04:51.920727 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.920918 kubelet[3452]: W0129 12:04:51.920739 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.920918 kubelet[3452]: E0129 12:04:51.920755 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.921576 kubelet[3452]: E0129 12:04:51.921388 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.921576 kubelet[3452]: W0129 12:04:51.921405 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.921576 kubelet[3452]: E0129 12:04:51.921421 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.921911 kubelet[3452]: E0129 12:04:51.921832 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.921911 kubelet[3452]: W0129 12:04:51.921847 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.921911 kubelet[3452]: E0129 12:04:51.921864 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.922337 kubelet[3452]: E0129 12:04:51.922265 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.922337 kubelet[3452]: W0129 12:04:51.922278 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.922337 kubelet[3452]: E0129 12:04:51.922294 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:51.922809 kubelet[3452]: E0129 12:04:51.922679 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.922809 kubelet[3452]: W0129 12:04:51.922694 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.922809 kubelet[3452]: E0129 12:04:51.922709 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.923157 kubelet[3452]: E0129 12:04:51.923023 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.923157 kubelet[3452]: W0129 12:04:51.923036 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.923157 kubelet[3452]: E0129 12:04:51.923050 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.923443 kubelet[3452]: E0129 12:04:51.923370 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.923443 kubelet[3452]: W0129 12:04:51.923384 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.923443 kubelet[3452]: E0129 12:04:51.923398 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.923841 kubelet[3452]: E0129 12:04:51.923766 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.923841 kubelet[3452]: W0129 12:04:51.923779 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.923841 kubelet[3452]: E0129 12:04:51.923793 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.924432 kubelet[3452]: E0129 12:04:51.924248 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.924432 kubelet[3452]: W0129 12:04:51.924263 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.924432 kubelet[3452]: E0129 12:04:51.924277 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:51.925245 kubelet[3452]: E0129 12:04:51.925084 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.925245 kubelet[3452]: W0129 12:04:51.925099 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.925245 kubelet[3452]: E0129 12:04:51.925114 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.925682 kubelet[3452]: E0129 12:04:51.925602 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.925682 kubelet[3452]: W0129 12:04:51.925617 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.925682 kubelet[3452]: E0129 12:04:51.925632 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.926142 kubelet[3452]: E0129 12:04:51.926018 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.926142 kubelet[3452]: W0129 12:04:51.926034 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.926142 kubelet[3452]: E0129 12:04:51.926048 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.926453 kubelet[3452]: E0129 12:04:51.926359 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.926453 kubelet[3452]: W0129 12:04:51.926372 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.926453 kubelet[3452]: E0129 12:04:51.926385 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.927055 kubelet[3452]: E0129 12:04:51.926916 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.927055 kubelet[3452]: W0129 12:04:51.926930 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.927055 kubelet[3452]: E0129 12:04:51.926946 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:51.927466 kubelet[3452]: E0129 12:04:51.927341 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.927466 kubelet[3452]: W0129 12:04:51.927355 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.927466 kubelet[3452]: E0129 12:04:51.927369 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.927804 kubelet[3452]: E0129 12:04:51.927693 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.927804 kubelet[3452]: W0129 12:04:51.927707 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.927804 kubelet[3452]: E0129 12:04:51.927722 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.928150 kubelet[3452]: E0129 12:04:51.928078 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.928150 kubelet[3452]: W0129 12:04:51.928091 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.928150 kubelet[3452]: E0129 12:04:51.928104 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:51.928618 kubelet[3452]: E0129 12:04:51.928478 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:51.928618 kubelet[3452]: W0129 12:04:51.928517 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:51.928618 kubelet[3452]: E0129 12:04:51.928531 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 12:04:52.003668 containerd[1982]: time="2025-01-29T12:04:52.003624084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xjbqz,Uid:dd59069c-44fa-4f25-bcb7-0f3bd6115020,Namespace:calico-system,Attempt:0,} returns sandbox id \"12e616a953c92a6d079d307c218701a4e0af350825305c7769094c79133309b4\""
Jan 29 12:04:52.015563 kubelet[3452]: E0129 12:04:52.015321 3452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:04:52.015563 kubelet[3452]: W0129 12:04:52.015349 3452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:04:52.015563 kubelet[3452]: E0129 12:04:52.015374 3452 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the kubelet driver-call.go / plugins.go triplet above repeats with only new timestamps, 12:04:52.016 through 12:04:52.026, interleaved with the volume-attach lines below]
Jan 29 12:04:52.015563 kubelet[3452]: I0129 12:04:52.015418 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea11a14-767a-481c-bdf8-160a5f9f8aed-kubelet-dir\") pod \"csi-node-driver-b2lqs\" (UID: \"cea11a14-767a-481c-bdf8-160a5f9f8aed\") " pod="calico-system/csi-node-driver-b2lqs"
Jan 29 12:04:52.017799 kubelet[3452]: I0129 12:04:52.016741 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92kvz\" (UniqueName: \"kubernetes.io/projected/cea11a14-767a-481c-bdf8-160a5f9f8aed-kube-api-access-92kvz\") pod \"csi-node-driver-b2lqs\" (UID: \"cea11a14-767a-481c-bdf8-160a5f9f8aed\") " pod="calico-system/csi-node-driver-b2lqs"
Jan 29 12:04:52.023039 kubelet[3452]: I0129 12:04:52.023018 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cea11a14-767a-481c-bdf8-160a5f9f8aed-socket-dir\") pod \"csi-node-driver-b2lqs\" (UID: \"cea11a14-767a-481c-bdf8-160a5f9f8aed\") " pod="calico-system/csi-node-driver-b2lqs"
Jan 29 12:04:52.024105 kubelet[3452]: I0129 12:04:52.023838 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cea11a14-767a-481c-bdf8-160a5f9f8aed-registration-dir\") pod \"csi-node-driver-b2lqs\" (UID: \"cea11a14-767a-481c-bdf8-160a5f9f8aed\") " pod="calico-system/csi-node-driver-b2lqs"
Jan 29 12:04:52.024281 kubelet[3452]: I0129 12:04:52.024118 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cea11a14-767a-481c-bdf8-160a5f9f8aed-varrun\") pod \"csi-node-driver-b2lqs\" (UID: \"cea11a14-767a-481c-bdf8-160a5f9f8aed\") " pod="calico-system/csi-node-driver-b2lqs"
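[annotation] The five VerifyControllerAttachedVolume entries above are the csi-node-driver pod's volumes being attached. As a sketch only, here is how the hostPath volumes among them could be declared with the k8s.io/api types; the paths are conventional kubelet locations assumed for illustration, not read from this cluster's manifest.

```go
package main

// Sketch only: hostPath volume declarations matching the names in the
// reconciler_common.go lines above. Paths are conventional kubelet
// locations assumed for illustration; they are not taken from this
// cluster's manifest. (kube-api-access-92kvz is a projected service
// account token volume and is omitted here.)

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func hostPathVolume(name, path string) corev1.Volume {
	t := corev1.HostPathDirectoryOrCreate
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &t},
		},
	}
}

func main() {
	vols := []corev1.Volume{
		hostPathVolume("varrun", "/var/run"),                                    // assumed path
		hostPathVolume("kubelet-dir", "/var/lib/kubelet"),                       // assumed path
		hostPathVolume("socket-dir", "/var/lib/kubelet/plugins/csi.tigera.io"),  // assumed path
		hostPathVolume("registration-dir", "/var/lib/kubelet/plugins_registry"), // assumed path
	}
	for _, v := range vols {
		fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```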
[the kubelet driver-call.go / plugins.go "unexpected end of JSON input" triplet keeps repeating verbatim with only new timestamps, 12:04:52.027 through 12:04:52.185, as the plugin prober retries]
Jan 29 12:04:53.392280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696082945.mount: Deactivated successfully.
Jan 29 12:04:53.448004 kubelet[3452]: E0129 12:04:53.447778 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed"
Jan 29 12:04:54.428638 containerd[1982]: time="2025-01-29T12:04:54.428588188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:54.430306 containerd[1982]: time="2025-01-29T12:04:54.429566339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 29 12:04:54.430306 containerd[1982]: time="2025-01-29T12:04:54.430212916Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:54.443585 containerd[1982]: time="2025-01-29T12:04:54.443536579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:54.444521 containerd[1982]: time="2025-01-29T12:04:54.444314953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.540382353s"
Jan 29 12:04:54.444521 containerd[1982]: time="2025-01-29T12:04:54.444358296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
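[annotation] Every driver-call.go line in the storm above is the same two-stage failure: the nodeagent~uds/uds executable does not exist yet, so the driver's init call produces no output, and decoding that empty output with Go's encoding/json yields exactly the logged error. A minimal, self-contained reproduction follows; DriverStatus is a trimmed stand-in for the FlexVolume reply shape, not kubelet's real type.

```go
package main

// Minimal reproduction of the repeated driver-call.go error above.
// DriverStatus is a trimmed stand-in for the FlexVolume reply shape,
// not kubelet's actual type.

import (
	"encoding/json"
	"fmt"
)

type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	output := "" // what a missing driver binary "returned" for [init]
	var st DriverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		// Prints: ... unexpected end of JSON input
		fmt.Println("Failed to unmarshal output for command: init, error:", err)
	}
}
```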
Jan 29 12:04:54.452653 containerd[1982]: time="2025-01-29T12:04:54.452614057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 12:04:54.505094 containerd[1982]: time="2025-01-29T12:04:54.505053182Z" level=info msg="CreateContainer within sandbox \"2836f930dd0820ea176660daac3267ec5dcf84f3fe8b834d9a2c7ba063c0b7d9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 12:04:54.554313 containerd[1982]: time="2025-01-29T12:04:54.554260906Z" level=info msg="CreateContainer within sandbox \"2836f930dd0820ea176660daac3267ec5dcf84f3fe8b834d9a2c7ba063c0b7d9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d72e6787a486252719a70e18d0dec1e678fa5fff05d7b3a331a0cc013616306d\""
Jan 29 12:04:54.563759 containerd[1982]: time="2025-01-29T12:04:54.563711827Z" level=info msg="StartContainer for \"d72e6787a486252719a70e18d0dec1e678fa5fff05d7b3a331a0cc013616306d\""
Jan 29 12:04:54.641716 systemd[1]: Started cri-containerd-d72e6787a486252719a70e18d0dec1e678fa5fff05d7b3a331a0cc013616306d.scope - libcontainer container d72e6787a486252719a70e18d0dec1e678fa5fff05d7b3a331a0cc013616306d.
Jan 29 12:04:54.725902 containerd[1982]: time="2025-01-29T12:04:54.725106437Z" level=info msg="StartContainer for \"d72e6787a486252719a70e18d0dec1e678fa5fff05d7b3a331a0cc013616306d\" returns successfully"
Jan 29 12:04:55.474597 kubelet[3452]: E0129 12:04:55.472065 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed"
Jan 29 12:04:55.668027 kubelet[3452]: I0129 12:04:55.667953 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-669f6d6d4d-66hnk" podStartSLOduration=2.118748057 podStartE2EDuration="4.667927662s" podCreationTimestamp="2025-01-29 12:04:51 +0000 UTC" firstStartedPulling="2025-01-29 12:04:51.903066574 +0000 UTC m=+14.759662575" lastFinishedPulling="2025-01-29 12:04:54.45224616 +0000 UTC m=+17.308842180" observedRunningTime="2025-01-29 12:04:55.647716792 +0000 UTC m=+18.504312825" watchObservedRunningTime="2025-01-29 12:04:55.667927662 +0000 UTC m=+18.524523682"
[the kubelet driver-call.go / plugins.go "unexpected end of JSON input" triplet repeats verbatim again with only new timestamps, 12:04:55.673 through 12:04:55.818, on the next prober pass]
Jan 29 12:04:56.100750 containerd[1982]: time="2025-01-29T12:04:56.100606530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:56.104267 containerd[1982]: time="2025-01-29T12:04:56.104066092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 29 12:04:56.106670 containerd[1982]: time="2025-01-29T12:04:56.106599400Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:56.110206 containerd[1982]: time="2025-01-29T12:04:56.110142299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:04:56.111090 containerd[1982]: time="2025-01-29T12:04:56.110902545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.658245895s"
Jan 29 12:04:56.111090 containerd[1982]: time="2025-01-29T12:04:56.110952422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
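[annotation] The pod2daemon-flexvol image pulled above is what eventually ends the error storm: in a stock Calico install, the flexvol-driver container it backs copies the uds FlexVolume driver into the kubelet plugin directory (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds). Below is a minimal sketch of the probe the kubelet keeps retrying, assuming the standard FlexVolume layout where a directory named vendor~driver holds an executable named driver; this is illustrative, not kubelet's actual code.

```go
package main

// Illustrative sketch of the FlexVolume probe loop, not kubelet's code.
// A plugin directory named "vendor~driver" must contain an executable
// named "driver"; probing runs "<driver> init" and decodes a JSON reply.
// Until Calico's flexvol-driver container installs nodeagent~uds/uds,
// the exec fails, the output stays empty, and decoding fails exactly as
// in the driver-call.go lines above.

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

type driverStatus struct {
	Status string `json:"status"` // a healthy driver replies {"status":"Success",...}
}

func probeInit(pluginDir string) error {
	parts := strings.Split(filepath.Base(pluginDir), "~")
	driver := filepath.Join(pluginDir, parts[len(parts)-1]) // .../nodeagent~uds/uds
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		// Binary not installed yet: the exec fails and out stays empty.
		fmt.Println("FlexVolume: driver call failed:", err)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return fmt.Errorf("failed to unmarshal output for command: init: %w", err)
	}
	if st.Status != "Success" {
		return fmt.Errorf("driver init returned status %q", st.Status)
	}
	return nil
}

func main() {
	dir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
	if err := probeInit(dir); err != nil {
		fmt.Println(err) // "... unexpected end of JSON input" while uds is missing
	}
}
```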
\"12e616a953c92a6d079d307c218701a4e0af350825305c7769094c79133309b4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e48e3098373d8396711b83427c667e23b431c541e23881254aa2cab4e1c46223\"" Jan 29 12:04:56.210572 containerd[1982]: time="2025-01-29T12:04:56.210535106Z" level=info msg="StartContainer for \"e48e3098373d8396711b83427c667e23b431c541e23881254aa2cab4e1c46223\"" Jan 29 12:04:56.268696 systemd[1]: Started cri-containerd-e48e3098373d8396711b83427c667e23b431c541e23881254aa2cab4e1c46223.scope - libcontainer container e48e3098373d8396711b83427c667e23b431c541e23881254aa2cab4e1c46223. Jan 29 12:04:56.351784 containerd[1982]: time="2025-01-29T12:04:56.351638523Z" level=info msg="StartContainer for \"e48e3098373d8396711b83427c667e23b431c541e23881254aa2cab4e1c46223\" returns successfully" Jan 29 12:04:56.383913 systemd[1]: cri-containerd-e48e3098373d8396711b83427c667e23b431c541e23881254aa2cab4e1c46223.scope: Deactivated successfully. Jan 29 12:04:56.441756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e48e3098373d8396711b83427c667e23b431c541e23881254aa2cab4e1c46223-rootfs.mount: Deactivated successfully. Jan 29 12:04:56.494377 containerd[1982]: time="2025-01-29T12:04:56.447401244Z" level=info msg="shim disconnected" id=e48e3098373d8396711b83427c667e23b431c541e23881254aa2cab4e1c46223 namespace=k8s.io Jan 29 12:04:56.494698 containerd[1982]: time="2025-01-29T12:04:56.494387040Z" level=warning msg="cleaning up after shim disconnected" id=e48e3098373d8396711b83427c667e23b431c541e23881254aa2cab4e1c46223 namespace=k8s.io Jan 29 12:04:56.494698 containerd[1982]: time="2025-01-29T12:04:56.494423245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:56.596620 kubelet[3452]: I0129 12:04:56.595527 3452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:56.597121 containerd[1982]: time="2025-01-29T12:04:56.596739535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:04:57.448628 kubelet[3452]: E0129 12:04:57.448176 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed" Jan 29 12:04:59.447689 kubelet[3452]: E0129 12:04:59.447629 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed" Jan 29 12:05:01.448348 kubelet[3452]: E0129 12:05:01.448294 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed" Jan 29 12:05:03.450839 kubelet[3452]: E0129 12:05:03.450706 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed" Jan 29 12:05:04.351650 
containerd[1982]: time="2025-01-29T12:05:04.351589143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:04.353700 containerd[1982]: time="2025-01-29T12:05:04.353466304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 12:05:04.356452 containerd[1982]: time="2025-01-29T12:05:04.355858892Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:04.362011 containerd[1982]: time="2025-01-29T12:05:04.361956977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:04.363181 containerd[1982]: time="2025-01-29T12:05:04.363136713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 7.766351061s" Jan 29 12:05:04.363367 containerd[1982]: time="2025-01-29T12:05:04.363343964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 12:05:04.371037 containerd[1982]: time="2025-01-29T12:05:04.370990162Z" level=info msg="CreateContainer within sandbox \"12e616a953c92a6d079d307c218701a4e0af350825305c7769094c79133309b4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:05:04.433599 containerd[1982]: time="2025-01-29T12:05:04.433113980Z" level=info msg="CreateContainer within sandbox \"12e616a953c92a6d079d307c218701a4e0af350825305c7769094c79133309b4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b\"" Jan 29 12:05:04.441704 containerd[1982]: time="2025-01-29T12:05:04.440928410Z" level=info msg="StartContainer for \"1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b\"" Jan 29 12:05:04.562396 systemd[1]: run-containerd-runc-k8s.io-1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b-runc.v36fVe.mount: Deactivated successfully. Jan 29 12:05:04.572809 systemd[1]: Started cri-containerd-1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b.scope - libcontainer container 1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b. Jan 29 12:05:04.632790 containerd[1982]: time="2025-01-29T12:05:04.630234535Z" level=info msg="StartContainer for \"1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b\" returns successfully" Jan 29 12:05:05.451736 kubelet[3452]: E0129 12:05:05.451404 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed" Jan 29 12:05:05.494006 systemd[1]: cri-containerd-1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b.scope: Deactivated successfully. 
Jan 29 12:05:05.581914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b-rootfs.mount: Deactivated successfully. Jan 29 12:05:05.602249 kubelet[3452]: I0129 12:05:05.602208 3452 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 12:05:05.677826 systemd[1]: Created slice kubepods-burstable-pod4e2f50f1_0e66_49b9_bbb2_5bccda0cafee.slice - libcontainer container kubepods-burstable-pod4e2f50f1_0e66_49b9_bbb2_5bccda0cafee.slice. Jan 29 12:05:05.712973 systemd[1]: Created slice kubepods-burstable-pod7cfb1c5f_4ca9_4bb5_93f1_2f4a39b22974.slice - libcontainer container kubepods-burstable-pod7cfb1c5f_4ca9_4bb5_93f1_2f4a39b22974.slice. Jan 29 12:05:05.713760 kubelet[3452]: I0129 12:05:05.713691 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fcsp\" (UniqueName: \"kubernetes.io/projected/944ed141-f023-4731-9f50-9c286c6b5644-kube-api-access-7fcsp\") pod \"calico-kube-controllers-c7c6ddf45-r7rfn\" (UID: \"944ed141-f023-4731-9f50-9c286c6b5644\") " pod="calico-system/calico-kube-controllers-c7c6ddf45-r7rfn" Jan 29 12:05:05.713878 kubelet[3452]: I0129 12:05:05.713775 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85gpg\" (UniqueName: \"kubernetes.io/projected/627faa7d-aa48-4603-9ed6-cae093344773-kube-api-access-85gpg\") pod \"calico-apiserver-87c886d86-tj5l2\" (UID: \"627faa7d-aa48-4603-9ed6-cae093344773\") " pod="calico-apiserver/calico-apiserver-87c886d86-tj5l2" Jan 29 12:05:05.713878 kubelet[3452]: I0129 12:05:05.713811 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/627faa7d-aa48-4603-9ed6-cae093344773-calico-apiserver-certs\") pod \"calico-apiserver-87c886d86-tj5l2\" (UID: \"627faa7d-aa48-4603-9ed6-cae093344773\") " pod="calico-apiserver/calico-apiserver-87c886d86-tj5l2" Jan 29 12:05:05.713878 kubelet[3452]: I0129 12:05:05.713838 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgsz8\" (UniqueName: \"kubernetes.io/projected/7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974-kube-api-access-wgsz8\") pod \"coredns-668d6bf9bc-457dw\" (UID: \"7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974\") " pod="kube-system/coredns-668d6bf9bc-457dw" Jan 29 12:05:05.713878 kubelet[3452]: I0129 12:05:05.713868 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/560b730b-9376-42a3-8fbf-e9fdd620b24a-calico-apiserver-certs\") pod \"calico-apiserver-87c886d86-cz5vh\" (UID: \"560b730b-9376-42a3-8fbf-e9fdd620b24a\") " pod="calico-apiserver/calico-apiserver-87c886d86-cz5vh" Jan 29 12:05:05.714076 kubelet[3452]: I0129 12:05:05.713896 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974-config-volume\") pod \"coredns-668d6bf9bc-457dw\" (UID: \"7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974\") " pod="kube-system/coredns-668d6bf9bc-457dw" Jan 29 12:05:05.714076 kubelet[3452]: I0129 12:05:05.713924 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq4np\" (UniqueName: 
\"kubernetes.io/projected/4e2f50f1-0e66-49b9-bbb2-5bccda0cafee-kube-api-access-sq4np\") pod \"coredns-668d6bf9bc-trscl\" (UID: \"4e2f50f1-0e66-49b9-bbb2-5bccda0cafee\") " pod="kube-system/coredns-668d6bf9bc-trscl" Jan 29 12:05:05.714076 kubelet[3452]: I0129 12:05:05.713953 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e2f50f1-0e66-49b9-bbb2-5bccda0cafee-config-volume\") pod \"coredns-668d6bf9bc-trscl\" (UID: \"4e2f50f1-0e66-49b9-bbb2-5bccda0cafee\") " pod="kube-system/coredns-668d6bf9bc-trscl" Jan 29 12:05:05.714076 kubelet[3452]: I0129 12:05:05.713985 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpr7b\" (UniqueName: \"kubernetes.io/projected/560b730b-9376-42a3-8fbf-e9fdd620b24a-kube-api-access-hpr7b\") pod \"calico-apiserver-87c886d86-cz5vh\" (UID: \"560b730b-9376-42a3-8fbf-e9fdd620b24a\") " pod="calico-apiserver/calico-apiserver-87c886d86-cz5vh" Jan 29 12:05:05.714076 kubelet[3452]: I0129 12:05:05.714032 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/944ed141-f023-4731-9f50-9c286c6b5644-tigera-ca-bundle\") pod \"calico-kube-controllers-c7c6ddf45-r7rfn\" (UID: \"944ed141-f023-4731-9f50-9c286c6b5644\") " pod="calico-system/calico-kube-controllers-c7c6ddf45-r7rfn" Jan 29 12:05:05.718359 containerd[1982]: time="2025-01-29T12:05:05.717012565Z" level=info msg="shim disconnected" id=1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b namespace=k8s.io Jan 29 12:05:05.718359 containerd[1982]: time="2025-01-29T12:05:05.717318635Z" level=warning msg="cleaning up after shim disconnected" id=1c1872d195489f764e89e9702250fb18e3cee44a11c438af776dfbd8f7938d1b namespace=k8s.io Jan 29 12:05:05.718359 containerd[1982]: time="2025-01-29T12:05:05.717333559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:05:05.742534 systemd[1]: Created slice kubepods-besteffort-pod560b730b_9376_42a3_8fbf_e9fdd620b24a.slice - libcontainer container kubepods-besteffort-pod560b730b_9376_42a3_8fbf_e9fdd620b24a.slice. Jan 29 12:05:05.759115 systemd[1]: Created slice kubepods-besteffort-pod627faa7d_aa48_4603_9ed6_cae093344773.slice - libcontainer container kubepods-besteffort-pod627faa7d_aa48_4603_9ed6_cae093344773.slice. Jan 29 12:05:05.771795 systemd[1]: Created slice kubepods-besteffort-pod944ed141_f023_4731_9f50_9c286c6b5644.slice - libcontainer container kubepods-besteffort-pod944ed141_f023_4731_9f50_9c286c6b5644.slice. 
Jan 29 12:05:05.992375 containerd[1982]: time="2025-01-29T12:05:05.990333514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trscl,Uid:4e2f50f1-0e66-49b9-bbb2-5bccda0cafee,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:06.025632 containerd[1982]: time="2025-01-29T12:05:06.025031373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-457dw,Uid:7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:06.053931 containerd[1982]: time="2025-01-29T12:05:06.053473146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87c886d86-cz5vh,Uid:560b730b-9376-42a3-8fbf-e9fdd620b24a,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:05:06.098196 containerd[1982]: time="2025-01-29T12:05:06.098137454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7c6ddf45-r7rfn,Uid:944ed141-f023-4731-9f50-9c286c6b5644,Namespace:calico-system,Attempt:0,}" Jan 29 12:05:06.108194 containerd[1982]: time="2025-01-29T12:05:06.108130498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87c886d86-tj5l2,Uid:627faa7d-aa48-4603-9ed6-cae093344773,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:05:06.542690 containerd[1982]: time="2025-01-29T12:05:06.542630586Z" level=error msg="Failed to destroy network for sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.545240 containerd[1982]: time="2025-01-29T12:05:06.544543812Z" level=error msg="Failed to destroy network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.553684 containerd[1982]: time="2025-01-29T12:05:06.552331553Z" level=error msg="encountered an error cleaning up failed sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.553684 containerd[1982]: time="2025-01-29T12:05:06.552534994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87c886d86-tj5l2,Uid:627faa7d-aa48-4603-9ed6-cae093344773,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.553966 containerd[1982]: time="2025-01-29T12:05:06.553924346Z" level=error msg="encountered an error cleaning up failed sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.554097 containerd[1982]: time="2025-01-29T12:05:06.554070252Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7c6ddf45-r7rfn,Uid:944ed141-f023-4731-9f50-9c286c6b5644,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.560705 containerd[1982]: time="2025-01-29T12:05:06.560645618Z" level=error msg="Failed to destroy network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.561076 containerd[1982]: time="2025-01-29T12:05:06.561042142Z" level=error msg="encountered an error cleaning up failed sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.561253 containerd[1982]: time="2025-01-29T12:05:06.561109081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87c886d86-cz5vh,Uid:560b730b-9376-42a3-8fbf-e9fdd620b24a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.563166 containerd[1982]: time="2025-01-29T12:05:06.563127071Z" level=error msg="Failed to destroy network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.563661 containerd[1982]: time="2025-01-29T12:05:06.563625056Z" level=error msg="encountered an error cleaning up failed sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.568857 kubelet[3452]: E0129 12:05:06.563830 3452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.568857 kubelet[3452]: E0129 12:05:06.563946 3452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87c886d86-tj5l2" Jan 29 12:05:06.568857 kubelet[3452]: E0129 12:05:06.563993 3452 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87c886d86-tj5l2" Jan 29 12:05:06.571213 kubelet[3452]: E0129 12:05:06.564063 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87c886d86-tj5l2_calico-apiserver(627faa7d-aa48-4603-9ed6-cae093344773)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87c886d86-tj5l2_calico-apiserver(627faa7d-aa48-4603-9ed6-cae093344773)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87c886d86-tj5l2" podUID="627faa7d-aa48-4603-9ed6-cae093344773" Jan 29 12:05:06.571213 kubelet[3452]: E0129 12:05:06.565644 3452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.571213 kubelet[3452]: E0129 12:05:06.565724 3452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c7c6ddf45-r7rfn" Jan 29 12:05:06.573432 kubelet[3452]: E0129 12:05:06.565747 3452 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c7c6ddf45-r7rfn" Jan 29 12:05:06.573432 kubelet[3452]: E0129 12:05:06.567580 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c7c6ddf45-r7rfn_calico-system(944ed141-f023-4731-9f50-9c286c6b5644)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c7c6ddf45-r7rfn_calico-system(944ed141-f023-4731-9f50-9c286c6b5644)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c7c6ddf45-r7rfn" podUID="944ed141-f023-4731-9f50-9c286c6b5644" Jan 29 12:05:06.573432 kubelet[3452]: E0129 12:05:06.570835 3452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.574001 kubelet[3452]: E0129 12:05:06.570902 3452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87c886d86-cz5vh" Jan 29 12:05:06.574001 kubelet[3452]: E0129 12:05:06.570929 3452 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-87c886d86-cz5vh" Jan 29 12:05:06.574001 kubelet[3452]: E0129 12:05:06.571000 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-87c886d86-cz5vh_calico-apiserver(560b730b-9376-42a3-8fbf-e9fdd620b24a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-87c886d86-cz5vh_calico-apiserver(560b730b-9376-42a3-8fbf-e9fdd620b24a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87c886d86-cz5vh" podUID="560b730b-9376-42a3-8fbf-e9fdd620b24a" Jan 29 12:05:06.579757 containerd[1982]: time="2025-01-29T12:05:06.573652983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-457dw,Uid:7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.587094 kubelet[3452]: E0129 12:05:06.586814 3452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.587094 kubelet[3452]: E0129 12:05:06.586879 3452 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-457dw" Jan 29 12:05:06.587094 kubelet[3452]: E0129 12:05:06.586910 3452 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-457dw" Jan 29 12:05:06.589141 kubelet[3452]: E0129 12:05:06.587038 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-457dw_kube-system(7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-457dw_kube-system(7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-457dw" podUID="7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974" Jan 29 12:05:06.615823 containerd[1982]: time="2025-01-29T12:05:06.615773374Z" level=error msg="Failed to destroy network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.616959 containerd[1982]: time="2025-01-29T12:05:06.616828544Z" level=error msg="encountered an error cleaning up failed sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.616959 containerd[1982]: time="2025-01-29T12:05:06.616904379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trscl,Uid:4e2f50f1-0e66-49b9-bbb2-5bccda0cafee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.618706 kubelet[3452]: E0129 12:05:06.618006 3452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.618706 kubelet[3452]: E0129 
12:05:06.618373 3452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-trscl" Jan 29 12:05:06.618706 kubelet[3452]: E0129 12:05:06.618547 3452 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-trscl" Jan 29 12:05:06.620012 kubelet[3452]: E0129 12:05:06.618824 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-trscl_kube-system(4e2f50f1-0e66-49b9-bbb2-5bccda0cafee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-trscl_kube-system(4e2f50f1-0e66-49b9-bbb2-5bccda0cafee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-trscl" podUID="4e2f50f1-0e66-49b9-bbb2-5bccda0cafee" Jan 29 12:05:06.622164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952-shm.mount: Deactivated successfully. 
Jan 29 12:05:06.666314 kubelet[3452]: I0129 12:05:06.666279 3452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:06.669767 kubelet[3452]: I0129 12:05:06.669117 3452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:06.673183 kubelet[3452]: I0129 12:05:06.672775 3452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Jan 29 12:05:06.673777 containerd[1982]: time="2025-01-29T12:05:06.673560022Z" level=info msg="StopPodSandbox for \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\"" Jan 29 12:05:06.675019 containerd[1982]: time="2025-01-29T12:05:06.674978187Z" level=info msg="Ensure that sandbox 3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad in task-service has been cleanup successfully" Jan 29 12:05:06.682964 containerd[1982]: time="2025-01-29T12:05:06.682305810Z" level=info msg="StopPodSandbox for \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\"" Jan 29 12:05:06.683953 containerd[1982]: time="2025-01-29T12:05:06.683591605Z" level=info msg="Ensure that sandbox e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f in task-service has been cleanup successfully" Jan 29 12:05:06.688133 containerd[1982]: time="2025-01-29T12:05:06.688023985Z" level=info msg="StopPodSandbox for \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\"" Jan 29 12:05:06.689709 containerd[1982]: time="2025-01-29T12:05:06.689645038Z" level=info msg="Ensure that sandbox 5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f in task-service has been cleanup successfully" Jan 29 12:05:06.717202 containerd[1982]: time="2025-01-29T12:05:06.717126139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:05:06.760349 kubelet[3452]: I0129 12:05:06.760305 3452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:06.783633 containerd[1982]: time="2025-01-29T12:05:06.783390494Z" level=info msg="StopPodSandbox for \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\"" Jan 29 12:05:06.788088 containerd[1982]: time="2025-01-29T12:05:06.786835041Z" level=info msg="Ensure that sandbox d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5 in task-service has been cleanup successfully" Jan 29 12:05:06.831934 kubelet[3452]: I0129 12:05:06.826428 3452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:06.838964 containerd[1982]: time="2025-01-29T12:05:06.838425393Z" level=info msg="StopPodSandbox for \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\"" Jan 29 12:05:06.838964 containerd[1982]: time="2025-01-29T12:05:06.838664485Z" level=info msg="Ensure that sandbox 3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952 in task-service has been cleanup successfully" Jan 29 12:05:06.885943 containerd[1982]: time="2025-01-29T12:05:06.885774515Z" level=error msg="StopPodSandbox for \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\" failed" error="failed to destroy network for sandbox 
\"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.889308 kubelet[3452]: E0129 12:05:06.887403 3452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:06.889308 kubelet[3452]: E0129 12:05:06.887500 3452 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad"} Jan 29 12:05:06.889308 kubelet[3452]: E0129 12:05:06.887841 3452 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"944ed141-f023-4731-9f50-9c286c6b5644\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:06.889308 kubelet[3452]: E0129 12:05:06.887882 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"944ed141-f023-4731-9f50-9c286c6b5644\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c7c6ddf45-r7rfn" podUID="944ed141-f023-4731-9f50-9c286c6b5644" Jan 29 12:05:06.894681 containerd[1982]: time="2025-01-29T12:05:06.894325387Z" level=error msg="StopPodSandbox for \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\" failed" error="failed to destroy network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.895695 kubelet[3452]: E0129 12:05:06.895625 3452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:06.896086 kubelet[3452]: E0129 12:05:06.895695 3452 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f"} Jan 29 12:05:06.896086 kubelet[3452]: E0129 12:05:06.895741 3452 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"627faa7d-aa48-4603-9ed6-cae093344773\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:06.896086 kubelet[3452]: E0129 12:05:06.895813 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"627faa7d-aa48-4603-9ed6-cae093344773\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87c886d86-tj5l2" podUID="627faa7d-aa48-4603-9ed6-cae093344773" Jan 29 12:05:06.942370 containerd[1982]: time="2025-01-29T12:05:06.942309744Z" level=error msg="StopPodSandbox for \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\" failed" error="failed to destroy network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.943130 kubelet[3452]: E0129 12:05:06.942910 3452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Jan 29 12:05:06.943130 kubelet[3452]: E0129 12:05:06.942974 3452 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f"} Jan 29 12:05:06.943130 kubelet[3452]: E0129 12:05:06.943017 3452 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:06.943130 kubelet[3452]: E0129 12:05:06.943051 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-457dw" 
podUID="7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974" Jan 29 12:05:06.955702 containerd[1982]: time="2025-01-29T12:05:06.955648023Z" level=error msg="StopPodSandbox for \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\" failed" error="failed to destroy network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.956425 kubelet[3452]: E0129 12:05:06.956297 3452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:06.956936 kubelet[3452]: E0129 12:05:06.956895 3452 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952"} Jan 29 12:05:06.957182 kubelet[3452]: E0129 12:05:06.957144 3452 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e2f50f1-0e66-49b9-bbb2-5bccda0cafee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:06.957408 kubelet[3452]: E0129 12:05:06.957365 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e2f50f1-0e66-49b9-bbb2-5bccda0cafee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-trscl" podUID="4e2f50f1-0e66-49b9-bbb2-5bccda0cafee" Jan 29 12:05:06.967198 containerd[1982]: time="2025-01-29T12:05:06.967139920Z" level=error msg="StopPodSandbox for \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\" failed" error="failed to destroy network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:06.967765 kubelet[3452]: E0129 12:05:06.967465 3452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:06.967765 kubelet[3452]: E0129 
12:05:06.967551 3452 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5"} Jan 29 12:05:06.967765 kubelet[3452]: E0129 12:05:06.967591 3452 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"560b730b-9376-42a3-8fbf-e9fdd620b24a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:06.967765 kubelet[3452]: E0129 12:05:06.967615 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"560b730b-9376-42a3-8fbf-e9fdd620b24a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-87c886d86-cz5vh" podUID="560b730b-9376-42a3-8fbf-e9fdd620b24a" Jan 29 12:05:07.456480 systemd[1]: Created slice kubepods-besteffort-podcea11a14_767a_481c_bdf8_160a5f9f8aed.slice - libcontainer container kubepods-besteffort-podcea11a14_767a_481c_bdf8_160a5f9f8aed.slice. Jan 29 12:05:07.459814 containerd[1982]: time="2025-01-29T12:05:07.459764213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2lqs,Uid:cea11a14-767a-481c-bdf8-160a5f9f8aed,Namespace:calico-system,Attempt:0,}" Jan 29 12:05:07.559787 containerd[1982]: time="2025-01-29T12:05:07.559726421Z" level=error msg="Failed to destroy network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:07.560146 containerd[1982]: time="2025-01-29T12:05:07.560108372Z" level=error msg="encountered an error cleaning up failed sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:07.560262 containerd[1982]: time="2025-01-29T12:05:07.560189540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2lqs,Uid:cea11a14-767a-481c-bdf8-160a5f9f8aed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:07.562780 kubelet[3452]: E0129 12:05:07.562730 3452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:07.562907 kubelet[3452]: E0129 12:05:07.562805 3452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b2lqs" Jan 29 12:05:07.562907 kubelet[3452]: E0129 12:05:07.562835 3452 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b2lqs" Jan 29 12:05:07.562907 kubelet[3452]: E0129 12:05:07.562886 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b2lqs_calico-system(cea11a14-767a-481c-bdf8-160a5f9f8aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b2lqs_calico-system(cea11a14-767a-481c-bdf8-160a5f9f8aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed" Jan 29 12:05:07.568782 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313-shm.mount: Deactivated successfully. 
Jan 29 12:05:07.831426 kubelet[3452]: I0129 12:05:07.831334 3452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:07.832294 containerd[1982]: time="2025-01-29T12:05:07.832245718Z" level=info msg="StopPodSandbox for \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\"" Jan 29 12:05:07.832661 containerd[1982]: time="2025-01-29T12:05:07.832479700Z" level=info msg="Ensure that sandbox 3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313 in task-service has been cleanup successfully" Jan 29 12:05:07.904660 containerd[1982]: time="2025-01-29T12:05:07.904602705Z" level=error msg="StopPodSandbox for \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\" failed" error="failed to destroy network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:05:07.904976 kubelet[3452]: E0129 12:05:07.904856 3452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:07.904976 kubelet[3452]: E0129 12:05:07.904918 3452 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313"} Jan 29 12:05:07.904976 kubelet[3452]: E0129 12:05:07.904965 3452 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cea11a14-767a-481c-bdf8-160a5f9f8aed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:05:07.905226 kubelet[3452]: E0129 12:05:07.904998 3452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cea11a14-767a-481c-bdf8-160a5f9f8aed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b2lqs" podUID="cea11a14-767a-481c-bdf8-160a5f9f8aed" Jan 29 12:05:15.416177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040752498.mount: Deactivated successfully. 
Jan 29 12:05:15.735797 containerd[1982]: time="2025-01-29T12:05:15.708494660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:05:15.736621 containerd[1982]: time="2025-01-29T12:05:15.723857934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:15.792470 containerd[1982]: time="2025-01-29T12:05:15.792344743Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:15.796845 containerd[1982]: time="2025-01-29T12:05:15.795252898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:15.800972 containerd[1982]: time="2025-01-29T12:05:15.800910895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.076396534s" Jan 29 12:05:15.800972 containerd[1982]: time="2025-01-29T12:05:15.800970125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 12:05:15.934853 containerd[1982]: time="2025-01-29T12:05:15.932791599Z" level=info msg="CreateContainer within sandbox \"12e616a953c92a6d079d307c218701a4e0af350825305c7769094c79133309b4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:05:16.069443 containerd[1982]: time="2025-01-29T12:05:16.066884847Z" level=info msg="CreateContainer within sandbox \"12e616a953c92a6d079d307c218701a4e0af350825305c7769094c79133309b4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b33fa8199a2361ec27d3de42388161e8fcc54679a9b0dd71415b3f2446f7652f\"" Jan 29 12:05:16.075983 containerd[1982]: time="2025-01-29T12:05:16.075854910Z" level=info msg="StartContainer for \"b33fa8199a2361ec27d3de42388161e8fcc54679a9b0dd71415b3f2446f7652f\"" Jan 29 12:05:16.221108 systemd[1]: Started cri-containerd-b33fa8199a2361ec27d3de42388161e8fcc54679a9b0dd71415b3f2446f7652f.scope - libcontainer container b33fa8199a2361ec27d3de42388161e8fcc54679a9b0dd71415b3f2446f7652f. Jan 29 12:05:16.270095 containerd[1982]: time="2025-01-29T12:05:16.270054450Z" level=info msg="StartContainer for \"b33fa8199a2361ec27d3de42388161e8fcc54679a9b0dd71415b3f2446f7652f\" returns successfully" Jan 29 12:05:16.410453 kubelet[3452]: I0129 12:05:16.409295 3452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:05:16.797455 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:05:16.809545 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jan 29 12:05:17.017041 kubelet[3452]: I0129 12:05:16.992865 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xjbqz" podStartSLOduration=2.169937481 podStartE2EDuration="25.96663802s" podCreationTimestamp="2025-01-29 12:04:51 +0000 UTC" firstStartedPulling="2025-01-29 12:04:52.006322771 +0000 UTC m=+14.862918784" lastFinishedPulling="2025-01-29 12:05:15.803023315 +0000 UTC m=+38.659619323" observedRunningTime="2025-01-29 12:05:16.960335097 +0000 UTC m=+39.816931139" watchObservedRunningTime="2025-01-29 12:05:16.96663802 +0000 UTC m=+39.823234039" Jan 29 12:05:19.195774 kernel: bpftool[4812]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:05:19.451290 containerd[1982]: time="2025-01-29T12:05:19.451144243Z" level=info msg="StopPodSandbox for \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\"" Jan 29 12:05:19.452694 containerd[1982]: time="2025-01-29T12:05:19.452518704Z" level=info msg="StopPodSandbox for \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\"" Jan 29 12:05:19.457513 containerd[1982]: time="2025-01-29T12:05:19.455950804Z" level=info msg="StopPodSandbox for \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\"" Jan 29 12:05:19.651333 (udev-worker)[4607]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:05:19.660675 systemd-networkd[1826]: vxlan.calico: Link UP Jan 29 12:05:19.660982 systemd-networkd[1826]: vxlan.calico: Gained carrier Jan 29 12:05:19.755363 (udev-worker)[4605]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:19.734 [INFO][4854] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:19.734 [INFO][4854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" iface="eth0" netns="/var/run/netns/cni-effdcae2-3936-84c3-3d28-7092b2b510c4" Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:19.736 [INFO][4854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" iface="eth0" netns="/var/run/netns/cni-effdcae2-3936-84c3-3d28-7092b2b510c4" Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:19.736 [INFO][4854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" iface="eth0" netns="/var/run/netns/cni-effdcae2-3936-84c3-3d28-7092b2b510c4" Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:19.736 [INFO][4854] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:19.736 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:20.210 [INFO][4901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" HandleID="k8s-pod-network.e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:20.216 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:20.216 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:20.232 [WARNING][4901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" HandleID="k8s-pod-network.e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:20.233 [INFO][4901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" HandleID="k8s-pod-network.e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:20.236 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:20.244394 containerd[1982]: 2025-01-29 12:05:20.240 [INFO][4854] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:20.250223 containerd[1982]: time="2025-01-29T12:05:20.244549541Z" level=info msg="TearDown network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\" successfully" Jan 29 12:05:20.250223 containerd[1982]: time="2025-01-29T12:05:20.244597066Z" level=info msg="StopPodSandbox for \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\" returns successfully" Jan 29 12:05:20.252178 containerd[1982]: time="2025-01-29T12:05:20.250846114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87c886d86-tj5l2,Uid:627faa7d-aa48-4603-9ed6-cae093344773,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:05:20.256009 systemd[1]: run-netns-cni\x2deffdcae2\x2d3936\x2d84c3\x2d3d28\x2d7092b2b510c4.mount: Deactivated successfully. Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:19.713 [INFO][4857] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:19.718 [INFO][4857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" iface="eth0" netns="/var/run/netns/cni-5d4c5a12-e21a-8d67-3e0d-c7ccef580547" Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:19.722 [INFO][4857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" iface="eth0" netns="/var/run/netns/cni-5d4c5a12-e21a-8d67-3e0d-c7ccef580547" Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:19.734 [INFO][4857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" iface="eth0" netns="/var/run/netns/cni-5d4c5a12-e21a-8d67-3e0d-c7ccef580547" Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:19.735 [INFO][4857] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:19.735 [INFO][4857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:20.215 [INFO][4900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" HandleID="k8s-pod-network.5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:20.216 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:20.235 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:20.250 [WARNING][4900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" HandleID="k8s-pod-network.5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:20.250 [INFO][4900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" HandleID="k8s-pod-network.5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:20.257 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:20.271751 containerd[1982]: 2025-01-29 12:05:20.264 [INFO][4857] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Jan 29 12:05:20.271751 containerd[1982]: time="2025-01-29T12:05:20.267741764Z" level=info msg="TearDown network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\" successfully" Jan 29 12:05:20.271751 containerd[1982]: time="2025-01-29T12:05:20.267791561Z" level=info msg="StopPodSandbox for \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\" returns successfully" Jan 29 12:05:20.277345 systemd[1]: run-netns-cni\x2d5d4c5a12\x2de21a\x2d8d67\x2d3e0d\x2dc7ccef580547.mount: Deactivated successfully. 
Jan 29 12:05:20.280423 containerd[1982]: time="2025-01-29T12:05:20.280311864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-457dw,Uid:7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974,Namespace:kube-system,Attempt:1,}" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:19.718 [INFO][4855] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:19.719 [INFO][4855] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" iface="eth0" netns="/var/run/netns/cni-39b39f6b-247d-d927-2dcb-904af6baad88" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:19.721 [INFO][4855] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" iface="eth0" netns="/var/run/netns/cni-39b39f6b-247d-d927-2dcb-904af6baad88" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:19.725 [INFO][4855] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" iface="eth0" netns="/var/run/netns/cni-39b39f6b-247d-d927-2dcb-904af6baad88" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:19.732 [INFO][4855] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:19.732 [INFO][4855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:20.212 [INFO][4899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" HandleID="k8s-pod-network.3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:20.216 [INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:20.256 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:20.281 [WARNING][4899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" HandleID="k8s-pod-network.3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:20.281 [INFO][4899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" HandleID="k8s-pod-network.3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:20.288 [INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:20.304592 containerd[1982]: 2025-01-29 12:05:20.293 [INFO][4855] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:20.312962 containerd[1982]: time="2025-01-29T12:05:20.312253076Z" level=info msg="TearDown network for sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\" successfully" Jan 29 12:05:20.312962 containerd[1982]: time="2025-01-29T12:05:20.312302641Z" level=info msg="StopPodSandbox for \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\" returns successfully" Jan 29 12:05:20.317653 containerd[1982]: time="2025-01-29T12:05:20.317154298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7c6ddf45-r7rfn,Uid:944ed141-f023-4731-9f50-9c286c6b5644,Namespace:calico-system,Attempt:1,}" Jan 29 12:05:20.329286 systemd[1]: run-netns-cni\x2d39b39f6b\x2d247d\x2dd927\x2d2dcb\x2d904af6baad88.mount: Deactivated successfully. Jan 29 12:05:20.737849 (udev-worker)[4915]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:05:20.738733 systemd-networkd[1826]: cali3ec77b49e65: Link UP Jan 29 12:05:20.741580 systemd-networkd[1826]: cali3ec77b49e65: Gained carrier Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.521 [INFO][4943] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0 calico-apiserver-87c886d86- calico-apiserver 627faa7d-aa48-4603-9ed6-cae093344773 783 0 2025-01-29 12:04:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:87c886d86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-23 calico-apiserver-87c886d86-tj5l2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3ec77b49e65 [] []}} ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-tj5l2" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.522 [INFO][4943] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-tj5l2" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.609 [INFO][4995] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" HandleID="k8s-pod-network.2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.635 [INFO][4995] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" HandleID="k8s-pod-network.2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030d9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-23", "pod":"calico-apiserver-87c886d86-tj5l2", "timestamp":"2025-01-29 12:05:20.609596841 +0000 UTC"}, 
Hostname:"ip-172-31-23-23", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.636 [INFO][4995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.636 [INFO][4995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.636 [INFO][4995] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-23' Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.643 [INFO][4995] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" host="ip-172-31-23-23" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.665 [INFO][4995] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-23" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.673 [INFO][4995] ipam/ipam.go 489: Trying affinity for 192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.681 [INFO][4995] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.690 [INFO][4995] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.692 [INFO][4995] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" host="ip-172-31-23-23" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.698 [INFO][4995] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.715 [INFO][4995] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" host="ip-172-31-23-23" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.726 [INFO][4995] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.1/26] block=192.168.76.0/26 handle="k8s-pod-network.2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" host="ip-172-31-23-23" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.726 [INFO][4995] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.1/26] handle="k8s-pod-network.2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" host="ip-172-31-23-23" Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.726 [INFO][4995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:05:20.771908 containerd[1982]: 2025-01-29 12:05:20.726 [INFO][4995] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.1/26] IPv6=[] ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" HandleID="k8s-pod-network.2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.774101 containerd[1982]: 2025-01-29 12:05:20.734 [INFO][4943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-tj5l2" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0", GenerateName:"calico-apiserver-87c886d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"627faa7d-aa48-4603-9ed6-cae093344773", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87c886d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"", Pod:"calico-apiserver-87c886d86-tj5l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ec77b49e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:20.774101 containerd[1982]: 2025-01-29 12:05:20.734 [INFO][4943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.1/32] ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-tj5l2" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.774101 containerd[1982]: 2025-01-29 12:05:20.734 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ec77b49e65 ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-tj5l2" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.774101 containerd[1982]: 2025-01-29 12:05:20.740 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-tj5l2" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.774101 containerd[1982]: 2025-01-29 12:05:20.741 [INFO][4943] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-tj5l2" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0", GenerateName:"calico-apiserver-87c886d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"627faa7d-aa48-4603-9ed6-cae093344773", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87c886d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c", Pod:"calico-apiserver-87c886d86-tj5l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ec77b49e65", MAC:"9a:95:3c:4d:ed:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:20.774101 containerd[1982]: 2025-01-29 12:05:20.765 [INFO][4943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-tj5l2" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:20.867809 systemd-networkd[1826]: calid3e7cb694b0: Link UP Jan 29 12:05:20.872126 systemd-networkd[1826]: calid3e7cb694b0: Gained carrier Jan 29 12:05:20.897231 containerd[1982]: time="2025-01-29T12:05:20.891264358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:20.897231 containerd[1982]: time="2025-01-29T12:05:20.891335015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:20.897231 containerd[1982]: time="2025-01-29T12:05:20.891355739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:20.897231 containerd[1982]: time="2025-01-29T12:05:20.891465670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.546 [INFO][4958] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0 calico-kube-controllers-c7c6ddf45- calico-system 944ed141-f023-4731-9f50-9c286c6b5644 782 0 2025-01-29 12:04:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c7c6ddf45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-23 calico-kube-controllers-c7c6ddf45-r7rfn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid3e7cb694b0 [] []}} ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Namespace="calico-system" Pod="calico-kube-controllers-c7c6ddf45-r7rfn" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.548 [INFO][4958] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Namespace="calico-system" Pod="calico-kube-controllers-c7c6ddf45-r7rfn" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.628 [INFO][4999] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" HandleID="k8s-pod-network.5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.650 [INFO][4999] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" HandleID="k8s-pod-network.5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-23", "pod":"calico-kube-controllers-c7c6ddf45-r7rfn", "timestamp":"2025-01-29 12:05:20.62815702 +0000 UTC"}, Hostname:"ip-172-31-23-23", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.650 [INFO][4999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.726 [INFO][4999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.727 [INFO][4999] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-23' Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.749 [INFO][4999] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" host="ip-172-31-23-23" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.769 [INFO][4999] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-23" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.785 [INFO][4999] ipam/ipam.go 489: Trying affinity for 192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.790 [INFO][4999] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.796 [INFO][4999] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.797 [INFO][4999] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" host="ip-172-31-23-23" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.799 [INFO][4999] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2 Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.819 [INFO][4999] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" host="ip-172-31-23-23" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.836 [INFO][4999] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.2/26] block=192.168.76.0/26 handle="k8s-pod-network.5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" host="ip-172-31-23-23" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.836 [INFO][4999] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.2/26] handle="k8s-pod-network.5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" host="ip-172-31-23-23" Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.836 [INFO][4999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:05:20.912020 containerd[1982]: 2025-01-29 12:05:20.836 [INFO][4999] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.2/26] IPv6=[] ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" HandleID="k8s-pod-network.5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.913123 containerd[1982]: 2025-01-29 12:05:20.860 [INFO][4958] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Namespace="calico-system" Pod="calico-kube-controllers-c7c6ddf45-r7rfn" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0", GenerateName:"calico-kube-controllers-c7c6ddf45-", Namespace:"calico-system", SelfLink:"", UID:"944ed141-f023-4731-9f50-9c286c6b5644", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7c6ddf45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"", Pod:"calico-kube-controllers-c7c6ddf45-r7rfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3e7cb694b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:20.913123 containerd[1982]: 2025-01-29 12:05:20.860 [INFO][4958] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.2/32] ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Namespace="calico-system" Pod="calico-kube-controllers-c7c6ddf45-r7rfn" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.913123 containerd[1982]: 2025-01-29 12:05:20.860 [INFO][4958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3e7cb694b0 ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Namespace="calico-system" Pod="calico-kube-controllers-c7c6ddf45-r7rfn" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.913123 containerd[1982]: 2025-01-29 12:05:20.866 [INFO][4958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Namespace="calico-system" Pod="calico-kube-controllers-c7c6ddf45-r7rfn" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.913123 containerd[1982]: 2025-01-29 12:05:20.868 [INFO][4958] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Namespace="calico-system" Pod="calico-kube-controllers-c7c6ddf45-r7rfn" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0", GenerateName:"calico-kube-controllers-c7c6ddf45-", Namespace:"calico-system", SelfLink:"", UID:"944ed141-f023-4731-9f50-9c286c6b5644", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7c6ddf45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2", Pod:"calico-kube-controllers-c7c6ddf45-r7rfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3e7cb694b0", MAC:"a2:39:ef:5b:74:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:20.913123 containerd[1982]: 2025-01-29 12:05:20.902 [INFO][4958] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2" Namespace="calico-system" Pod="calico-kube-controllers-c7c6ddf45-r7rfn" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:20.949219 systemd[1]: Started cri-containerd-2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c.scope - libcontainer container 2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c. Jan 29 12:05:20.993906 systemd-networkd[1826]: calia66cddfa2a9: Link UP Jan 29 12:05:20.994143 systemd-networkd[1826]: calia66cddfa2a9: Gained carrier Jan 29 12:05:21.013507 containerd[1982]: time="2025-01-29T12:05:21.006971337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:21.013507 containerd[1982]: time="2025-01-29T12:05:21.007042126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:21.013507 containerd[1982]: time="2025-01-29T12:05:21.007068206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:21.021619 containerd[1982]: time="2025-01-29T12:05:21.014542040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.554 [INFO][4971] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0 coredns-668d6bf9bc- kube-system 7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974 781 0 2025-01-29 12:04:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-23 coredns-668d6bf9bc-457dw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia66cddfa2a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Namespace="kube-system" Pod="coredns-668d6bf9bc-457dw" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.554 [INFO][4971] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Namespace="kube-system" Pod="coredns-668d6bf9bc-457dw" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.643 [INFO][5003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" HandleID="k8s-pod-network.638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.671 [INFO][5003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" HandleID="k8s-pod-network.638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039fdb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-23", "pod":"coredns-668d6bf9bc-457dw", "timestamp":"2025-01-29 12:05:20.643170694 +0000 UTC"}, Hostname:"ip-172-31-23-23", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.671 [INFO][5003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.836 [INFO][5003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.837 [INFO][5003] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-23' Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.850 [INFO][5003] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" host="ip-172-31-23-23" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.885 [INFO][5003] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-23" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.905 [INFO][5003] ipam/ipam.go 489: Trying affinity for 192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.910 [INFO][5003] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.917 [INFO][5003] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.919 [INFO][5003] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" host="ip-172-31-23-23" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.923 [INFO][5003] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.945 [INFO][5003] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" host="ip-172-31-23-23" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.974 [INFO][5003] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.3/26] block=192.168.76.0/26 handle="k8s-pod-network.638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" host="ip-172-31-23-23" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.974 [INFO][5003] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.3/26] handle="k8s-pod-network.638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" host="ip-172-31-23-23" Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.974 [INFO][5003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:05:21.024738 containerd[1982]: 2025-01-29 12:05:20.974 [INFO][5003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.3/26] IPv6=[] ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" HandleID="k8s-pod-network.638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:21.025868 containerd[1982]: 2025-01-29 12:05:20.985 [INFO][4971] cni-plugin/k8s.go 386: Populated endpoint ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Namespace="kube-system" Pod="coredns-668d6bf9bc-457dw" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"", Pod:"coredns-668d6bf9bc-457dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia66cddfa2a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:21.025868 containerd[1982]: 2025-01-29 12:05:20.986 [INFO][4971] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.3/32] ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Namespace="kube-system" Pod="coredns-668d6bf9bc-457dw" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:21.025868 containerd[1982]: 2025-01-29 12:05:20.986 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia66cddfa2a9 ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Namespace="kube-system" Pod="coredns-668d6bf9bc-457dw" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:21.025868 containerd[1982]: 2025-01-29 12:05:20.993 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Namespace="kube-system" Pod="coredns-668d6bf9bc-457dw" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 
12:05:21.025868 containerd[1982]: 2025-01-29 12:05:20.995 [INFO][4971] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Namespace="kube-system" Pod="coredns-668d6bf9bc-457dw" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f", Pod:"coredns-668d6bf9bc-457dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia66cddfa2a9", MAC:"1a:45:b7:4a:7d:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:21.025868 containerd[1982]: 2025-01-29 12:05:21.020 [INFO][4971] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f" Namespace="kube-system" Pod="coredns-668d6bf9bc-457dw" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:21.062083 systemd[1]: Started cri-containerd-5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2.scope - libcontainer container 5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2. Jan 29 12:05:21.096925 containerd[1982]: time="2025-01-29T12:05:21.096001937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:21.096925 containerd[1982]: time="2025-01-29T12:05:21.096090873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:21.096925 containerd[1982]: time="2025-01-29T12:05:21.096113426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:21.099281 containerd[1982]: time="2025-01-29T12:05:21.099067262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:21.181867 systemd[1]: Started cri-containerd-638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f.scope - libcontainer container 638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f. Jan 29 12:05:21.198435 containerd[1982]: time="2025-01-29T12:05:21.197857500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87c886d86-tj5l2,Uid:627faa7d-aa48-4603-9ed6-cae093344773,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c\"" Jan 29 12:05:21.205343 containerd[1982]: time="2025-01-29T12:05:21.204461786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:05:21.215670 containerd[1982]: time="2025-01-29T12:05:21.215142726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7c6ddf45-r7rfn,Uid:944ed141-f023-4731-9f50-9c286c6b5644,Namespace:calico-system,Attempt:1,} returns sandbox id \"5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2\"" Jan 29 12:05:21.292866 containerd[1982]: time="2025-01-29T12:05:21.292825195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-457dw,Uid:7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974,Namespace:kube-system,Attempt:1,} returns sandbox id \"638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f\"" Jan 29 12:05:21.298796 containerd[1982]: time="2025-01-29T12:05:21.298363775Z" level=info msg="CreateContainer within sandbox \"638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:05:21.336792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213173289.mount: Deactivated successfully. Jan 29 12:05:21.348593 containerd[1982]: time="2025-01-29T12:05:21.348551888Z" level=info msg="CreateContainer within sandbox \"638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cb1566a5a314f13d82a7f3bc1da2ef0b474ae70997653c3e2fe4e54f8852dcc\"" Jan 29 12:05:21.349813 containerd[1982]: time="2025-01-29T12:05:21.349772711Z" level=info msg="StartContainer for \"7cb1566a5a314f13d82a7f3bc1da2ef0b474ae70997653c3e2fe4e54f8852dcc\"" Jan 29 12:05:21.395759 systemd[1]: Started cri-containerd-7cb1566a5a314f13d82a7f3bc1da2ef0b474ae70997653c3e2fe4e54f8852dcc.scope - libcontainer container 7cb1566a5a314f13d82a7f3bc1da2ef0b474ae70997653c3e2fe4e54f8852dcc. Jan 29 12:05:21.439711 containerd[1982]: time="2025-01-29T12:05:21.439664875Z" level=info msg="StartContainer for \"7cb1566a5a314f13d82a7f3bc1da2ef0b474ae70997653c3e2fe4e54f8852dcc\" returns successfully" Jan 29 12:05:21.450370 containerd[1982]: time="2025-01-29T12:05:21.450186731Z" level=info msg="StopPodSandbox for \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\"" Jan 29 12:05:21.450724 containerd[1982]: time="2025-01-29T12:05:21.450314999Z" level=info msg="StopPodSandbox for \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\"" Jan 29 12:05:21.593568 systemd-networkd[1826]: vxlan.calico: Gained IPv6LL Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.627 [INFO][5229] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.636 [INFO][5229] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" iface="eth0" netns="/var/run/netns/cni-e615d796-4913-2e80-fb40-622096c19bd8" Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.636 [INFO][5229] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" iface="eth0" netns="/var/run/netns/cni-e615d796-4913-2e80-fb40-622096c19bd8" Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.637 [INFO][5229] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" iface="eth0" netns="/var/run/netns/cni-e615d796-4913-2e80-fb40-622096c19bd8" Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.637 [INFO][5229] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.639 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.775 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" HandleID="k8s-pod-network.3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.776 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.776 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.789 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" HandleID="k8s-pod-network.3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.789 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" HandleID="k8s-pod-network.3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.792 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:21.802634 containerd[1982]: 2025-01-29 12:05:21.796 [INFO][5229] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:21.807778 containerd[1982]: time="2025-01-29T12:05:21.803311555Z" level=info msg="TearDown network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\" successfully" Jan 29 12:05:21.807778 containerd[1982]: time="2025-01-29T12:05:21.803341458Z" level=info msg="StopPodSandbox for \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\" returns successfully" Jan 29 12:05:21.807778 containerd[1982]: time="2025-01-29T12:05:21.804402196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trscl,Uid:4e2f50f1-0e66-49b9-bbb2-5bccda0cafee,Namespace:kube-system,Attempt:1,}" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.694 [INFO][5231] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.694 [INFO][5231] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" iface="eth0" netns="/var/run/netns/cni-d349ba58-a91a-d5ff-72d5-3192d027118d" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.695 [INFO][5231] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" iface="eth0" netns="/var/run/netns/cni-d349ba58-a91a-d5ff-72d5-3192d027118d" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.695 [INFO][5231] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" iface="eth0" netns="/var/run/netns/cni-d349ba58-a91a-d5ff-72d5-3192d027118d" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.695 [INFO][5231] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.695 [INFO][5231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.806 [INFO][5251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" HandleID="k8s-pod-network.d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.807 [INFO][5251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.808 [INFO][5251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.821 [WARNING][5251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" HandleID="k8s-pod-network.d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.821 [INFO][5251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" HandleID="k8s-pod-network.d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.825 [INFO][5251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:21.834971 containerd[1982]: 2025-01-29 12:05:21.829 [INFO][5231] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:21.836181 containerd[1982]: time="2025-01-29T12:05:21.835606482Z" level=info msg="TearDown network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\" successfully" Jan 29 12:05:21.836181 containerd[1982]: time="2025-01-29T12:05:21.835649111Z" level=info msg="StopPodSandbox for \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\" returns successfully" Jan 29 12:05:21.839515 containerd[1982]: time="2025-01-29T12:05:21.839116013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87c886d86-cz5vh,Uid:560b730b-9376-42a3-8fbf-e9fdd620b24a,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:05:21.842697 systemd-networkd[1826]: cali3ec77b49e65: Gained IPv6LL Jan 29 12:05:22.205919 systemd-networkd[1826]: cali004bfd851bc: Link UP Jan 29 12:05:22.208003 systemd-networkd[1826]: cali004bfd851bc: Gained carrier Jan 29 12:05:22.227610 kubelet[3452]: I0129 12:05:22.227541 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-457dw" podStartSLOduration=39.227515037 podStartE2EDuration="39.227515037s" podCreationTimestamp="2025-01-29 12:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:22.070959917 +0000 UTC m=+44.927555948" watchObservedRunningTime="2025-01-29 12:05:22.227515037 +0000 UTC m=+45.084111055" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:21.910 [INFO][5260] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0 coredns-668d6bf9bc- kube-system 4e2f50f1-0e66-49b9-bbb2-5bccda0cafee 803 0 2025-01-29 12:04:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-23 coredns-668d6bf9bc-trscl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali004bfd851bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Namespace="kube-system" Pod="coredns-668d6bf9bc-trscl" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:21.910 [INFO][5260] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Namespace="kube-system" Pod="coredns-668d6bf9bc-trscl" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.026 [INFO][5284] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" HandleID="k8s-pod-network.10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.052 [INFO][5284] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" HandleID="k8s-pod-network.10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000286c60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-23", "pod":"coredns-668d6bf9bc-trscl", "timestamp":"2025-01-29 12:05:22.026266465 +0000 UTC"}, Hostname:"ip-172-31-23-23", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.052 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.052 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.052 [INFO][5284] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-23' Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.059 [INFO][5284] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" host="ip-172-31-23-23" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.148 [INFO][5284] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-23" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.162 [INFO][5284] ipam/ipam.go 489: Trying affinity for 192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.165 [INFO][5284] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.169 [INFO][5284] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.169 [INFO][5284] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" host="ip-172-31-23-23" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.172 [INFO][5284] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17 Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.181 [INFO][5284] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" host="ip-172-31-23-23" Jan 29 12:05:22.234623 
containerd[1982]: 2025-01-29 12:05:22.194 [INFO][5284] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.4/26] block=192.168.76.0/26 handle="k8s-pod-network.10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" host="ip-172-31-23-23" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.194 [INFO][5284] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.4/26] handle="k8s-pod-network.10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" host="ip-172-31-23-23" Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.194 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:22.234623 containerd[1982]: 2025-01-29 12:05:22.194 [INFO][5284] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.4/26] IPv6=[] ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" HandleID="k8s-pod-network.10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:22.236337 containerd[1982]: 2025-01-29 12:05:22.198 [INFO][5260] cni-plugin/k8s.go 386: Populated endpoint ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Namespace="kube-system" Pod="coredns-668d6bf9bc-trscl" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e2f50f1-0e66-49b9-bbb2-5bccda0cafee", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"", Pod:"coredns-668d6bf9bc-trscl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali004bfd851bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:22.236337 containerd[1982]: 2025-01-29 12:05:22.198 [INFO][5260] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.4/32] ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Namespace="kube-system" Pod="coredns-668d6bf9bc-trscl" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:22.236337 containerd[1982]: 2025-01-29 12:05:22.198 [INFO][5260] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali004bfd851bc ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Namespace="kube-system" Pod="coredns-668d6bf9bc-trscl" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:22.236337 containerd[1982]: 2025-01-29 12:05:22.209 [INFO][5260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Namespace="kube-system" Pod="coredns-668d6bf9bc-trscl" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:22.236337 containerd[1982]: 2025-01-29 12:05:22.211 [INFO][5260] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Namespace="kube-system" Pod="coredns-668d6bf9bc-trscl" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e2f50f1-0e66-49b9-bbb2-5bccda0cafee", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17", Pod:"coredns-668d6bf9bc-trscl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali004bfd851bc", MAC:"ea:23:35:ae:1f:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:22.236337 containerd[1982]: 2025-01-29 12:05:22.230 [INFO][5260] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17" Namespace="kube-system" Pod="coredns-668d6bf9bc-trscl" WorkloadEndpoint="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:22.260014 systemd[1]: run-netns-cni\x2dd349ba58\x2da91a\x2dd5ff\x2d72d5\x2d3192d027118d.mount: Deactivated successfully. Jan 29 12:05:22.260150 systemd[1]: run-netns-cni\x2de615d796\x2d4913\x2d2e80\x2dfb40\x2d622096c19bd8.mount: Deactivated successfully. 
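The two run-netns-cni\x2d... mount units systemd just deactivated are the per-sandbox network namespaces; the \x2d sequences are systemd's unit-name escaping, which maps '/' to '-' and hex-escapes literal '-' (and other unsafe bytes). A rough sketch of that escaping in Python, an approximation of `systemd-escape --path --suffix=mount` rather than the real implementation:

```python
def systemd_escape_path(path: str, suffix: str = "mount") -> str:
    """Approximate systemd path escaping: strip outer '/', turn '/' into
    '-', and hex-escape '-' plus anything else outside the safe set."""
    safe = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch in safe:
            out.append(ch)
        else:  # '-' and other bytes become \xNN escapes
            out.extend(f"\\x{b:02x}" for b in ch.encode())
    return "".join(out) + "." + suffix

print(systemd_escape_path("/run/netns/cni-d349ba58-a91a-d5ff-72d5-3192d027118d"))
# run-netns-cni\x2dd349ba58\x2da91a\x2dd5ff\x2d72d5\x2d3192d027118d.mount
```

The output reproduces the mount unit names logged above for the deleted CNI namespaces.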
Jan 29 12:05:22.296726 containerd[1982]: time="2025-01-29T12:05:22.296618487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:22.297360 containerd[1982]: time="2025-01-29T12:05:22.297293150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:22.298133 containerd[1982]: time="2025-01-29T12:05:22.298063888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:22.298890 containerd[1982]: time="2025-01-29T12:05:22.298741188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:22.354825 systemd-networkd[1826]: calia66cddfa2a9: Gained IPv6LL Jan 29 12:05:22.363803 systemd[1]: Started cri-containerd-10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17.scope - libcontainer container 10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17. Jan 29 12:05:22.450905 containerd[1982]: time="2025-01-29T12:05:22.450865002Z" level=info msg="StopPodSandbox for \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\"" Jan 29 12:05:22.455864 systemd-networkd[1826]: calib6a1f23bead: Link UP Jan 29 12:05:22.458464 systemd-networkd[1826]: calib6a1f23bead: Gained carrier Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:21.966 [INFO][5272] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0 calico-apiserver-87c886d86- calico-apiserver 560b730b-9376-42a3-8fbf-e9fdd620b24a 804 0 2025-01-29 12:04:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:87c886d86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-23 calico-apiserver-87c886d86-cz5vh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6a1f23bead [] []}} ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-cz5vh" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:21.966 [INFO][5272] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-cz5vh" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.081 [INFO][5290] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" HandleID="k8s-pod-network.054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.157 [INFO][5290] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" HandleID="k8s-pod-network.054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" 
Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004cdf20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-23", "pod":"calico-apiserver-87c886d86-cz5vh", "timestamp":"2025-01-29 12:05:22.081053641 +0000 UTC"}, Hostname:"ip-172-31-23-23", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.157 [INFO][5290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.194 [INFO][5290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.195 [INFO][5290] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-23' Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.203 [INFO][5290] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" host="ip-172-31-23-23" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.270 [INFO][5290] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-23" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.348 [INFO][5290] ipam/ipam.go 489: Trying affinity for 192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.358 [INFO][5290] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.388 [INFO][5290] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.388 [INFO][5290] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" host="ip-172-31-23-23" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.394 [INFO][5290] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.408 [INFO][5290] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" host="ip-172-31-23-23" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.436 [INFO][5290] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.5/26] block=192.168.76.0/26 handle="k8s-pod-network.054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" host="ip-172-31-23-23" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.436 [INFO][5290] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.5/26] handle="k8s-pod-network.054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" host="ip-172-31-23-23" Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.437 [INFO][5290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
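The IPAM sequence above is Calico's block-affinity allocation: the node holds an affinity for 192.168.76.0/26 and, under the host-wide lock, claims the next free ordinal inside that block (the coredns pod received 192.168.76.4 and this flow just claimed 192.168.76.5 for the apiserver pod). A toy sketch of the ordinal walk, with an in-memory `used` set standing in for Calico's persisted block document:

```python
import ipaddress

def assign_from_block(block: str, used: set[str]) -> str | None:
    """First-free-ordinal walk over a host-affine block. The real
    allocator does this under the host-wide lock logged above and then
    persists the result ('Writing block in order to claim IPs')."""
    for addr in ipaddress.ip_network(block).hosts():
        if str(addr) not in used:
            used.add(str(addr))
            return str(addr)
    return None  # block exhausted; Calico would look for another block

# Assuming (for illustration) the first three ordinals were taken by
# earlier endpoints on this node, the walk reproduces the log's results:
used = {"192.168.76.1", "192.168.76.2", "192.168.76.3"}
print(assign_from_block("192.168.76.0/26", used))  # 192.168.76.4 (coredns)
print(assign_from_block("192.168.76.0/26", used))  # 192.168.76.5 (apiserver)
```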
Jan 29 12:05:22.527610 containerd[1982]: 2025-01-29 12:05:22.437 [INFO][5290] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.5/26] IPv6=[] ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" HandleID="k8s-pod-network.054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:22.528701 containerd[1982]: 2025-01-29 12:05:22.445 [INFO][5272] cni-plugin/k8s.go 386: Populated endpoint ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-cz5vh" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0", GenerateName:"calico-apiserver-87c886d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"560b730b-9376-42a3-8fbf-e9fdd620b24a", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87c886d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"", Pod:"calico-apiserver-87c886d86-cz5vh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6a1f23bead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:22.528701 containerd[1982]: 2025-01-29 12:05:22.446 [INFO][5272] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.5/32] ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-cz5vh" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:22.528701 containerd[1982]: 2025-01-29 12:05:22.446 [INFO][5272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6a1f23bead ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-cz5vh" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:22.528701 containerd[1982]: 2025-01-29 12:05:22.458 [INFO][5272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-cz5vh" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:22.528701 containerd[1982]: 2025-01-29 12:05:22.464 [INFO][5272] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-cz5vh" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0", GenerateName:"calico-apiserver-87c886d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"560b730b-9376-42a3-8fbf-e9fdd620b24a", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87c886d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff", Pod:"calico-apiserver-87c886d86-cz5vh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6a1f23bead", MAC:"22:3a:93:d4:b9:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:22.528701 containerd[1982]: 2025-01-29 12:05:22.520 [INFO][5272] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff" Namespace="calico-apiserver" Pod="calico-apiserver-87c886d86-cz5vh" WorkloadEndpoint="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:22.633585 containerd[1982]: time="2025-01-29T12:05:22.633078441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trscl,Uid:4e2f50f1-0e66-49b9-bbb2-5bccda0cafee,Namespace:kube-system,Attempt:1,} returns sandbox id \"10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17\"" Jan 29 12:05:22.639823 containerd[1982]: time="2025-01-29T12:05:22.639767463Z" level=info msg="CreateContainer within sandbox \"10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:05:22.672633 containerd[1982]: time="2025-01-29T12:05:22.671859662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:22.672633 containerd[1982]: time="2025-01-29T12:05:22.671951632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:22.672633 containerd[1982]: time="2025-01-29T12:05:22.671971441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:22.672633 containerd[1982]: time="2025-01-29T12:05:22.672085979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:22.686526 containerd[1982]: time="2025-01-29T12:05:22.684016612Z" level=info msg="CreateContainer within sandbox \"10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7deb5e0365b984d4a66eb4ef4da97d3583ac9f1375b5152d956882d0e2accc0\"" Jan 29 12:05:22.687508 containerd[1982]: time="2025-01-29T12:05:22.687012195Z" level=info msg="StartContainer for \"d7deb5e0365b984d4a66eb4ef4da97d3583ac9f1375b5152d956882d0e2accc0\"" Jan 29 12:05:22.713778 systemd[1]: Started cri-containerd-054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff.scope - libcontainer container 054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff. Jan 29 12:05:22.789476 systemd[1]: Started cri-containerd-d7deb5e0365b984d4a66eb4ef4da97d3583ac9f1375b5152d956882d0e2accc0.scope - libcontainer container d7deb5e0365b984d4a66eb4ef4da97d3583ac9f1375b5152d956882d0e2accc0. Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.703 [INFO][5361] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.705 [INFO][5361] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" iface="eth0" netns="/var/run/netns/cni-a0a2f01a-e8ec-5163-6196-681cfa0b8d6c" Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.707 [INFO][5361] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" iface="eth0" netns="/var/run/netns/cni-a0a2f01a-e8ec-5163-6196-681cfa0b8d6c" Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.707 [INFO][5361] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" iface="eth0" netns="/var/run/netns/cni-a0a2f01a-e8ec-5163-6196-681cfa0b8d6c" Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.707 [INFO][5361] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.707 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.824 [INFO][5418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" HandleID="k8s-pod-network.3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.824 [INFO][5418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.824 [INFO][5418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.834 [WARNING][5418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" HandleID="k8s-pod-network.3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.834 [INFO][5418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" HandleID="k8s-pod-network.3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.840 [INFO][5418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:22.856920 containerd[1982]: 2025-01-29 12:05:22.846 [INFO][5361] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:22.862162 containerd[1982]: time="2025-01-29T12:05:22.857441084Z" level=info msg="TearDown network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\" successfully" Jan 29 12:05:22.862351 containerd[1982]: time="2025-01-29T12:05:22.862318132Z" level=info msg="StopPodSandbox for \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\" returns successfully" Jan 29 12:05:22.871644 containerd[1982]: time="2025-01-29T12:05:22.871455915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2lqs,Uid:cea11a14-767a-481c-bdf8-160a5f9f8aed,Namespace:calico-system,Attempt:1,}" Jan 29 12:05:22.930813 systemd-networkd[1826]: calid3e7cb694b0: Gained IPv6LL Jan 29 12:05:23.003338 containerd[1982]: time="2025-01-29T12:05:23.002968770Z" level=info msg="StartContainer for \"d7deb5e0365b984d4a66eb4ef4da97d3583ac9f1375b5152d956882d0e2accc0\" returns successfully" Jan 29 12:05:23.228683 kubelet[3452]: I0129 12:05:23.228335 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-trscl" podStartSLOduration=40.22831154 podStartE2EDuration="40.22831154s" podCreationTimestamp="2025-01-29 12:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:23.128653069 +0000 UTC m=+45.985249090" watchObservedRunningTime="2025-01-29 12:05:23.22831154 +0000 UTC m=+46.084907560" Jan 29 12:05:23.232531 containerd[1982]: time="2025-01-29T12:05:23.231542614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87c886d86-cz5vh,Uid:560b730b-9376-42a3-8fbf-e9fdd620b24a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff\"" Jan 29 12:05:23.263431 systemd[1]: run-netns-cni\x2da0a2f01a\x2de8ec\x2d5163\x2d6196\x2d681cfa0b8d6c.mount: Deactivated successfully. 
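Each teardown above ends the same way: the release is keyed by a handle of the form k8s-pod-network.&lt;sandbox id&gt;, and when that handle is already gone the plugin logs "Asked to release address but it doesn't exist. Ignoring" and succeeds anyway, which is what makes CNI DEL safe to repeat. A minimal sketch of that idempotent release, with a plain dict as a hypothetical stand-in for the datastore:

```python
def handle_id(network: str, sandbox_id: str) -> str:
    # Matches the HandleID pattern in the log: k8s-pod-network.<sandbox id>
    return f"{network}.{sandbox_id}"

def release(allocations: dict[str, str], handle: str) -> str | None:
    """Idempotent release: a missing handle is a warning and a no-op,
    not a failure, so a repeated CNI DEL still returns success."""
    if handle not in allocations:
        print(f"WARNING: no allocation for {handle!r}; ignoring")
        return None
    return allocations.pop(handle)

allocs: dict[str, str] = {}  # hypothetical in-memory datastore
release(allocs, handle_id(
    "k8s-pod-network",
    "3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313"))
```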
Jan 29 12:05:23.407658 systemd-networkd[1826]: calid3edb94721a: Link UP Jan 29 12:05:23.407951 systemd-networkd[1826]: calid3edb94721a: Gained carrier Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.111 [INFO][5454] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0 csi-node-driver- calico-system cea11a14-767a-481c-bdf8-160a5f9f8aed 820 0 2025-01-29 12:04:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-23 csi-node-driver-b2lqs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid3edb94721a [] []}} ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Namespace="calico-system" Pod="csi-node-driver-b2lqs" WorkloadEndpoint="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.112 [INFO][5454] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Namespace="calico-system" Pod="csi-node-driver-b2lqs" WorkloadEndpoint="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.261 [INFO][5480] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" HandleID="k8s-pod-network.5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.294 [INFO][5480] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" HandleID="k8s-pod-network.5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290830), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-23", "pod":"csi-node-driver-b2lqs", "timestamp":"2025-01-29 12:05:23.261915601 +0000 UTC"}, Hostname:"ip-172-31-23-23", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.295 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.295 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.295 [INFO][5480] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-23' Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.299 [INFO][5480] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" host="ip-172-31-23-23" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.321 [INFO][5480] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-23" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.335 [INFO][5480] ipam/ipam.go 489: Trying affinity for 192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.344 [INFO][5480] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.349 [INFO][5480] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ip-172-31-23-23" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.349 [INFO][5480] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" host="ip-172-31-23-23" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.353 [INFO][5480] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14 Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.376 [INFO][5480] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" host="ip-172-31-23-23" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.389 [INFO][5480] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.6/26] block=192.168.76.0/26 handle="k8s-pod-network.5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" host="ip-172-31-23-23" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.389 [INFO][5480] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.6/26] handle="k8s-pod-network.5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" host="ip-172-31-23-23" Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.389 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:05:23.452992 containerd[1982]: 2025-01-29 12:05:23.389 [INFO][5480] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.6/26] IPv6=[] ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" HandleID="k8s-pod-network.5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:23.455421 containerd[1982]: 2025-01-29 12:05:23.399 [INFO][5454] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Namespace="calico-system" Pod="csi-node-driver-b2lqs" WorkloadEndpoint="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cea11a14-767a-481c-bdf8-160a5f9f8aed", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"", Pod:"csi-node-driver-b2lqs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3edb94721a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:23.455421 containerd[1982]: 2025-01-29 12:05:23.399 [INFO][5454] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.6/32] ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Namespace="calico-system" Pod="csi-node-driver-b2lqs" WorkloadEndpoint="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:23.455421 containerd[1982]: 2025-01-29 12:05:23.400 [INFO][5454] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3edb94721a ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Namespace="calico-system" Pod="csi-node-driver-b2lqs" WorkloadEndpoint="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:23.455421 containerd[1982]: 2025-01-29 12:05:23.407 [INFO][5454] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Namespace="calico-system" Pod="csi-node-driver-b2lqs" WorkloadEndpoint="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:23.455421 containerd[1982]: 2025-01-29 12:05:23.409 [INFO][5454] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Namespace="calico-system" Pod="csi-node-driver-b2lqs" 
WorkloadEndpoint="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cea11a14-767a-481c-bdf8-160a5f9f8aed", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14", Pod:"csi-node-driver-b2lqs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3edb94721a", MAC:"c6:2b:3c:59:59:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:23.455421 containerd[1982]: 2025-01-29 12:05:23.440 [INFO][5454] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14" Namespace="calico-system" Pod="csi-node-driver-b2lqs" WorkloadEndpoint="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:23.618165 containerd[1982]: time="2025-01-29T12:05:23.616867244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:23.618165 containerd[1982]: time="2025-01-29T12:05:23.616936124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:23.618165 containerd[1982]: time="2025-01-29T12:05:23.616954307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:23.618165 containerd[1982]: time="2025-01-29T12:05:23.617059188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:23.683723 systemd[1]: Started cri-containerd-5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14.scope - libcontainer container 5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14. 
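As with the earlier sandboxes, the start is bracketed by a transient systemd scope (cri-containerd-&lt;id&gt;.scope) and by the four "loading plugin io.containerd..." lines emitted by the runc v2 shim initializing its ttrpc plugins. To correlate the two in a captured journal, a small parser along these lines could work; the regexes are written against the exact (escaped-quote) text above and may need adjusting for other quoting:

```python
import re

SCOPE = re.compile(r"Started cri-containerd-([0-9a-f]{64})\.scope")
PLUGIN = re.compile(r'loading plugin \\?"(io\.containerd\.[A-Za-z0-9.]+)\\?"')

def shim_events(journal_text: str) -> list[tuple[str, str]]:
    """Return ('scope', id) / ('plugin', name) events in log order --
    just enough structure to pair each sandbox start with its shim's
    plugin-load chatter."""
    events = [(m.start(), "scope", m.group(1)) for m in SCOPE.finditer(journal_text)]
    events += [(m.start(), "plugin", m.group(1)) for m in PLUGIN.finditer(journal_text)]
    return [(kind, val) for _, kind, val in sorted(events)]
```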
Jan 29 12:05:23.765588 containerd[1982]: time="2025-01-29T12:05:23.765080606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2lqs,Uid:cea11a14-767a-481c-bdf8-160a5f9f8aed,Namespace:calico-system,Attempt:1,} returns sandbox id \"5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14\"" Jan 29 12:05:23.826883 systemd-networkd[1826]: calib6a1f23bead: Gained IPv6LL Jan 29 12:05:23.956718 systemd-networkd[1826]: cali004bfd851bc: Gained IPv6LL Jan 29 12:05:25.159555 containerd[1982]: time="2025-01-29T12:05:25.159504283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:25.162038 containerd[1982]: time="2025-01-29T12:05:25.161520625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 12:05:25.164176 containerd[1982]: time="2025-01-29T12:05:25.163822783Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:25.168057 containerd[1982]: time="2025-01-29T12:05:25.168006955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:25.169007 containerd[1982]: time="2025-01-29T12:05:25.168967267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.963888709s" Jan 29 12:05:25.169251 containerd[1982]: time="2025-01-29T12:05:25.169140166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:05:25.170863 containerd[1982]: time="2025-01-29T12:05:25.170614138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:05:25.172412 containerd[1982]: time="2025-01-29T12:05:25.172289940Z" level=info msg="CreateContainer within sandbox \"2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:05:25.234703 systemd-networkd[1826]: calid3edb94721a: Gained IPv6LL Jan 29 12:05:25.235825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229362410.mount: Deactivated successfully. Jan 29 12:05:25.244640 containerd[1982]: time="2025-01-29T12:05:25.239199123Z" level=info msg="CreateContainer within sandbox \"2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab8434be4cdb00e1eb172b563abd47531c865e15e3ef8dc8ddb31b654c7aa77d\"" Jan 29 12:05:25.249048 containerd[1982]: time="2025-01-29T12:05:25.247968542Z" level=info msg="StartContainer for \"ab8434be4cdb00e1eb172b563abd47531c865e15e3ef8dc8ddb31b654c7aa77d\"" Jan 29 12:05:25.408632 systemd[1]: run-containerd-runc-k8s.io-ab8434be4cdb00e1eb172b563abd47531c865e15e3ef8dc8ddb31b654c7aa77d-runc.Bs4TeJ.mount: Deactivated successfully. 
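containerd timed the apiserver image pull here at 3.963888709s for a ~42 MB transfer; the kubelet entry a little further on brackets nearly the same window with firstStartedPulling/lastFinishedPulling. kubelet prints Go time values at nanosecond precision, which Python's datetime can handle after truncating to microseconds; a sketch using timestamps copied from the log:

```python
from datetime import datetime

def parse_kubelet_ts(s: str) -> datetime:
    """Parse '2025-01-29 12:05:21.201816778 +0000 UTC': Go prints nine
    fractional digits, so keep only the six that datetime supports."""
    date, clock = s.split()[:2]
    hms, frac = clock.split(".")
    return datetime.fromisoformat(f"{date}T{hms}.{frac[:6]}+00:00")

started  = parse_kubelet_ts("2025-01-29 12:05:21.201816778 +0000 UTC")
finished = parse_kubelet_ts("2025-01-29 12:05:25.170393795 +0000 UTC")
print((finished - started).total_seconds())  # 3.968577 -- close to, but not
# exactly, containerd's own 3.963888709s timer, presumably because kubelet
# brackets the CRI call rather than the raw transfer.
```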
Jan 29 12:05:25.437883 systemd[1]: Started cri-containerd-ab8434be4cdb00e1eb172b563abd47531c865e15e3ef8dc8ddb31b654c7aa77d.scope - libcontainer container ab8434be4cdb00e1eb172b563abd47531c865e15e3ef8dc8ddb31b654c7aa77d. Jan 29 12:05:25.554737 containerd[1982]: time="2025-01-29T12:05:25.554660509Z" level=info msg="StartContainer for \"ab8434be4cdb00e1eb172b563abd47531c865e15e3ef8dc8ddb31b654c7aa77d\" returns successfully" Jan 29 12:05:26.143875 kubelet[3452]: I0129 12:05:26.142698 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-87c886d86-tj5l2" podStartSLOduration=31.174069469 podStartE2EDuration="35.142646506s" podCreationTimestamp="2025-01-29 12:04:51 +0000 UTC" firstStartedPulling="2025-01-29 12:05:21.201816778 +0000 UTC m=+44.058412774" lastFinishedPulling="2025-01-29 12:05:25.170393795 +0000 UTC m=+48.026989811" observedRunningTime="2025-01-29 12:05:26.139371605 +0000 UTC m=+48.995967624" watchObservedRunningTime="2025-01-29 12:05:26.142646506 +0000 UTC m=+48.999242525" Jan 29 12:05:27.495880 ntpd[1956]: Listen normally on 7 vxlan.calico 192.168.76.0:123 Jan 29 12:05:27.496049 ntpd[1956]: Listen normally on 8 vxlan.calico [fe80::64fc:cdff:fee6:5817%4]:123 Jan 29 12:05:27.496109 ntpd[1956]: Listen normally on 9 cali3ec77b49e65 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 29 12:05:27.496148 ntpd[1956]: Listen normally on 10 calid3e7cb694b0 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 12:05:27.496186 ntpd[1956]: Listen normally on 11 calia66cddfa2a9 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 12:05:27.496223 ntpd[1956]: Listen normally on 12 cali004bfd851bc [fe80::ecee:eeff:feee:eeee%10]:123 Jan 29 12:05:27.496260 ntpd[1956]: Listen normally on 13 calib6a1f23bead [fe80::ecee:eeff:feee:eeee%11]:123 Jan 29 12:05:27.496526 ntpd[1956]: Listen normally on 14 calid3edb94721a [fe80::ecee:eeff:feee:eeee%12]:123 Jan 29 12:05:27.573032 systemd[1]: Started sshd@7-172.31.23.23:22-139.178.68.195:40462.service - OpenSSH per-connection server daemon (139.178.68.195:40462). Jan 29 12:05:27.884592 sshd[5615]: Accepted publickey for core from 139.178.68.195 port 40462 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:27.908266 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:27.943388 systemd-logind[1964]: New session 8 of user core. Jan 29 12:05:27.944924 systemd[1]: Started session-8.scope - Session 8 of User core.
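Every cali* interface ntpd just bound shows the same link-local address, fe80::ecee:eeff:feee:eeee. That is expected: Calico gives its host-side veths the fixed MAC ee:ee:ee:ee:ee:ee, and the kernel derives the link-local address via modified EUI-64 (flip the universal/local bit of the first octet and splice ff:fe into the middle). The vxlan.calico address decodes the same way from its device MAC; a quick check:

```python
import ipaddress

def mac_to_link_local(mac: str) -> str:
    """Modified EUI-64: flip bit 0x02 of the first octet, insert ff:fe
    between the two MAC halves, and prefix with fe80::/64."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64))

print(mac_to_link_local("ee:ee:ee:ee:ee:ee"))  # fe80::ecee:eeff:feee:eeee
print(mac_to_link_local("66:fc:cd:e6:58:17"))  # fe80::64fc:cdff:fee6:5817
```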
Jan 29 12:05:29.354724 containerd[1982]: time="2025-01-29T12:05:29.354573927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:29.366668 containerd[1982]: time="2025-01-29T12:05:29.366510267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 12:05:29.375642 containerd[1982]: time="2025-01-29T12:05:29.374448727Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:29.389693 containerd[1982]: time="2025-01-29T12:05:29.389623606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:29.391735 containerd[1982]: time="2025-01-29T12:05:29.391689545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.221037626s" Jan 29 12:05:29.392278 containerd[1982]: time="2025-01-29T12:05:29.391909789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 12:05:29.394845 containerd[1982]: time="2025-01-29T12:05:29.394315200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:05:29.522957 containerd[1982]: time="2025-01-29T12:05:29.522673528Z" level=info msg="CreateContainer within sandbox \"5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:05:29.585081 containerd[1982]: time="2025-01-29T12:05:29.584981508Z" level=info msg="CreateContainer within sandbox \"5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"47cb3a34a0c9356bf3baa156a3b4aa48deca17f9e2ff74cdbc33d2eb5195f6b6\"" Jan 29 12:05:29.587531 containerd[1982]: time="2025-01-29T12:05:29.585801226Z" level=info msg="StartContainer for \"47cb3a34a0c9356bf3baa156a3b4aa48deca17f9e2ff74cdbc33d2eb5195f6b6\"" Jan 29 12:05:29.627939 sshd[5615]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:29.635738 systemd[1]: sshd@7-172.31.23.23:22-139.178.68.195:40462.service: Deactivated successfully. Jan 29 12:05:29.640072 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:05:29.644128 systemd-logind[1964]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:05:29.646249 systemd-logind[1964]: Removed session 8. Jan 29 12:05:29.680164 systemd[1]: Started cri-containerd-47cb3a34a0c9356bf3baa156a3b4aa48deca17f9e2ff74cdbc33d2eb5195f6b6.scope - libcontainer container 47cb3a34a0c9356bf3baa156a3b4aa48deca17f9e2ff74cdbc33d2eb5195f6b6. 
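The pod_startup_latency_tracker entries in this log separate image pulls from the rest of startup: podStartE2EDuration runs from podCreationTimestamp to observedRunningTime, and podStartSLOduration appears to be E2E minus the pull window (for the coredns pods, whose pull timestamps are the zero time, the two are equal). Checking that against the calico-apiserver-87c886d86-tj5l2 entry logged just before the ntpd block:

```python
# Values copied from the tj5l2 pod_startup_latency_tracker entry;
# seconds-within-the-minute offsets keep the arithmetic readable.
e2e         = 35.142646506   # podStartE2EDuration
pull_start  = 21.201816778   # firstStartedPulling  (12:05:21.201816778)
pull_finish = 25.170393795   # lastFinishedPulling  (12:05:25.170393795)

slo = e2e - (pull_finish - pull_start)
print(f"{slo:.9f}")  # 31.174069489 -- the logged podStartSLOduration is
                     # 31.174069469, agreeing to within the rounding of
                     # the printed durations
```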
Jan 29 12:05:29.764944 containerd[1982]: time="2025-01-29T12:05:29.764888057Z" level=info msg="StartContainer for \"47cb3a34a0c9356bf3baa156a3b4aa48deca17f9e2ff74cdbc33d2eb5195f6b6\" returns successfully" Jan 29 12:05:29.793159 containerd[1982]: time="2025-01-29T12:05:29.793108736Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:29.795345 containerd[1982]: time="2025-01-29T12:05:29.795292940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:05:29.802712 containerd[1982]: time="2025-01-29T12:05:29.802657692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 408.300834ms" Jan 29 12:05:29.802712 containerd[1982]: time="2025-01-29T12:05:29.802712726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:05:29.804320 containerd[1982]: time="2025-01-29T12:05:29.803935269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:05:29.807201 containerd[1982]: time="2025-01-29T12:05:29.807154971Z" level=info msg="CreateContainer within sandbox \"054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:05:29.858135 containerd[1982]: time="2025-01-29T12:05:29.854913416Z" level=info msg="CreateContainer within sandbox \"054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a2b0073b4972a4973229f094dfa19f0f039b6c5ec649f25e427db67c9046d3c5\"" Jan 29 12:05:29.912587 containerd[1982]: time="2025-01-29T12:05:29.910475788Z" level=info msg="StartContainer for \"a2b0073b4972a4973229f094dfa19f0f039b6c5ec649f25e427db67c9046d3c5\"" Jan 29 12:05:29.985376 systemd[1]: Started cri-containerd-a2b0073b4972a4973229f094dfa19f0f039b6c5ec649f25e427db67c9046d3c5.scope - libcontainer container a2b0073b4972a4973229f094dfa19f0f039b6c5ec649f25e427db67c9046d3c5. 
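The second PullImage of the same apiserver image returns in 408.300834ms with only 77 bytes read, against 42,001,404 bytes for the first pull, consistent with a digest check against content already in the local store rather than a fresh layer download. The contrast in plain numbers (values from the log):

```python
first_bytes,  first_secs  = 42_001_404, 3.963888709  # initial pull (earlier)
repull_bytes, repull_secs = 77,         0.408300834  # this re-pull

print(f"first pull:  {first_bytes / first_secs / 1e6:.1f} MB/s")  # ~10.6 MB/s
print(f"re-pull read {repull_bytes} B in {repull_secs * 1e3:.0f} ms: cache hit")
```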
Jan 29 12:05:30.117815 containerd[1982]: time="2025-01-29T12:05:30.117767829Z" level=info msg="StartContainer for \"a2b0073b4972a4973229f094dfa19f0f039b6c5ec649f25e427db67c9046d3c5\" returns successfully" Jan 29 12:05:30.265057 kubelet[3452]: I0129 12:05:30.264652 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-87c886d86-cz5vh" podStartSLOduration=32.699535664 podStartE2EDuration="39.264619957s" podCreationTimestamp="2025-01-29 12:04:51 +0000 UTC" firstStartedPulling="2025-01-29 12:05:23.238609807 +0000 UTC m=+46.095205807" lastFinishedPulling="2025-01-29 12:05:29.803694087 +0000 UTC m=+52.660290100" observedRunningTime="2025-01-29 12:05:30.221849279 +0000 UTC m=+53.078445294" watchObservedRunningTime="2025-01-29 12:05:30.264619957 +0000 UTC m=+53.121215975" Jan 29 12:05:30.401020 kubelet[3452]: I0129 12:05:30.399406 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c7c6ddf45-r7rfn" podStartSLOduration=30.225689191 podStartE2EDuration="38.39938227s" podCreationTimestamp="2025-01-29 12:04:52 +0000 UTC" firstStartedPulling="2025-01-29 12:05:21.220398463 +0000 UTC m=+44.076994459" lastFinishedPulling="2025-01-29 12:05:29.394091535 +0000 UTC m=+52.250687538" observedRunningTime="2025-01-29 12:05:30.268243977 +0000 UTC m=+53.124839997" watchObservedRunningTime="2025-01-29 12:05:30.39938227 +0000 UTC m=+53.255978289" Jan 29 12:05:30.421387 systemd[1]: run-containerd-runc-k8s.io-47cb3a34a0c9356bf3baa156a3b4aa48deca17f9e2ff74cdbc33d2eb5195f6b6-runc.Bi4bPH.mount: Deactivated successfully. Jan 29 12:05:31.193252 kubelet[3452]: I0129 12:05:31.192360 3452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:05:31.960892 containerd[1982]: time="2025-01-29T12:05:31.960843440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:31.964107 containerd[1982]: time="2025-01-29T12:05:31.963192958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 12:05:31.965003 containerd[1982]: time="2025-01-29T12:05:31.964365380Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:31.969890 containerd[1982]: time="2025-01-29T12:05:31.969753181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:31.971270 containerd[1982]: time="2025-01-29T12:05:31.971219116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.167245199s" Jan 29 12:05:31.971270 containerd[1982]: time="2025-01-29T12:05:31.971271654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 12:05:31.995312 containerd[1982]: time="2025-01-29T12:05:31.995079846Z" level=info msg="CreateContainer within sandbox 
\"5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:05:32.038789 containerd[1982]: time="2025-01-29T12:05:32.038741203Z" level=info msg="CreateContainer within sandbox \"5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"78f4299b22d84348e53d67425813ffdaeaa909aabd3a989796f88ae27430d538\"" Jan 29 12:05:32.039536 containerd[1982]: time="2025-01-29T12:05:32.039335077Z" level=info msg="StartContainer for \"78f4299b22d84348e53d67425813ffdaeaa909aabd3a989796f88ae27430d538\"" Jan 29 12:05:32.092464 systemd[1]: run-containerd-runc-k8s.io-78f4299b22d84348e53d67425813ffdaeaa909aabd3a989796f88ae27430d538-runc.f1JnYp.mount: Deactivated successfully. Jan 29 12:05:32.101822 systemd[1]: Started cri-containerd-78f4299b22d84348e53d67425813ffdaeaa909aabd3a989796f88ae27430d538.scope - libcontainer container 78f4299b22d84348e53d67425813ffdaeaa909aabd3a989796f88ae27430d538. Jan 29 12:05:32.215202 containerd[1982]: time="2025-01-29T12:05:32.211372993Z" level=info msg="StartContainer for \"78f4299b22d84348e53d67425813ffdaeaa909aabd3a989796f88ae27430d538\" returns successfully" Jan 29 12:05:32.218720 containerd[1982]: time="2025-01-29T12:05:32.218677481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:05:33.899508 containerd[1982]: time="2025-01-29T12:05:33.899447108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:33.900786 containerd[1982]: time="2025-01-29T12:05:33.900513751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 12:05:33.902578 containerd[1982]: time="2025-01-29T12:05:33.901786572Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:33.925988 containerd[1982]: time="2025-01-29T12:05:33.925871473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:33.947659 containerd[1982]: time="2025-01-29T12:05:33.947606661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.728877883s" Jan 29 12:05:33.947659 containerd[1982]: time="2025-01-29T12:05:33.947663370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:05:33.954544 containerd[1982]: time="2025-01-29T12:05:33.954377284Z" level=info msg="CreateContainer within sandbox \"5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:05:33.997547 containerd[1982]: time="2025-01-29T12:05:33.997342547Z" level=info msg="CreateContainer within 
sandbox \"5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4c561207f1478c89b77addcea5c0579bbf50e05b67f5001fb0ed9b929c90218f\"" Jan 29 12:05:34.000753 containerd[1982]: time="2025-01-29T12:05:34.000311852Z" level=info msg="StartContainer for \"4c561207f1478c89b77addcea5c0579bbf50e05b67f5001fb0ed9b929c90218f\"" Jan 29 12:05:34.078746 systemd[1]: Started cri-containerd-4c561207f1478c89b77addcea5c0579bbf50e05b67f5001fb0ed9b929c90218f.scope - libcontainer container 4c561207f1478c89b77addcea5c0579bbf50e05b67f5001fb0ed9b929c90218f. Jan 29 12:05:34.116259 containerd[1982]: time="2025-01-29T12:05:34.116203662Z" level=info msg="StartContainer for \"4c561207f1478c89b77addcea5c0579bbf50e05b67f5001fb0ed9b929c90218f\" returns successfully" Jan 29 12:05:34.685720 systemd[1]: Started sshd@8-172.31.23.23:22-139.178.68.195:40472.service - OpenSSH per-connection server daemon (139.178.68.195:40472). Jan 29 12:05:34.946796 sshd[5820]: Accepted publickey for core from 139.178.68.195 port 40472 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:05:34.954294 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:34.970543 systemd-logind[1964]: New session 9 of user core. Jan 29 12:05:34.974859 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:05:35.057217 kubelet[3452]: I0129 12:05:35.052785 3452 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:05:35.096788 kubelet[3452]: I0129 12:05:35.090133 3452 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:05:35.888947 sshd[5820]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:35.893228 systemd[1]: sshd@8-172.31.23.23:22-139.178.68.195:40472.service: Deactivated successfully. Jan 29 12:05:35.896072 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:05:35.901959 systemd-logind[1964]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:05:35.906470 systemd-logind[1964]: Removed session 9. Jan 29 12:05:37.483897 containerd[1982]: time="2025-01-29T12:05:37.483580163Z" level=info msg="StopPodSandbox for \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\"" Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.799 [WARNING][5847] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e2f50f1-0e66-49b9-bbb2-5bccda0cafee", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17", Pod:"coredns-668d6bf9bc-trscl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali004bfd851bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.801 [INFO][5847] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.801 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" iface="eth0" netns="" Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.801 [INFO][5847] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.801 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.839 [INFO][5854] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" HandleID="k8s-pod-network.3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.839 [INFO][5854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.839 [INFO][5854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.848 [WARNING][5854] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" HandleID="k8s-pod-network.3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.849 [INFO][5854] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" HandleID="k8s-pod-network.3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.852 [INFO][5854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:37.857670 containerd[1982]: 2025-01-29 12:05:37.855 [INFO][5847] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:37.857670 containerd[1982]: time="2025-01-29T12:05:37.857514334Z" level=info msg="TearDown network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\" successfully" Jan 29 12:05:37.857670 containerd[1982]: time="2025-01-29T12:05:37.857546137Z" level=info msg="StopPodSandbox for \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\" returns successfully" Jan 29 12:05:37.891356 containerd[1982]: time="2025-01-29T12:05:37.891292408Z" level=info msg="RemovePodSandbox for \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\"" Jan 29 12:05:37.891356 containerd[1982]: time="2025-01-29T12:05:37.891350390Z" level=info msg="Forcibly stopping sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\"" Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.944 [WARNING][5872] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e2f50f1-0e66-49b9-bbb2-5bccda0cafee", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"10ba36b3291fb9e5c8aebbdab2b13be499dd24a07af093edfbf71daf2b5f6c17", Pod:"coredns-668d6bf9bc-trscl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali004bfd851bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.944 [INFO][5872] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.944 [INFO][5872] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" iface="eth0" netns="" Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.944 [INFO][5872] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.945 [INFO][5872] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.972 [INFO][5878] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" HandleID="k8s-pod-network.3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.973 [INFO][5878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.973 [INFO][5878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.980 [WARNING][5878] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" HandleID="k8s-pod-network.3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.980 [INFO][5878] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" HandleID="k8s-pod-network.3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--trscl-eth0" Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.981 [INFO][5878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:37.987780 containerd[1982]: 2025-01-29 12:05:37.983 [INFO][5872] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952" Jan 29 12:05:37.990057 containerd[1982]: time="2025-01-29T12:05:37.987825334Z" level=info msg="TearDown network for sandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\" successfully" Jan 29 12:05:38.026414 containerd[1982]: time="2025-01-29T12:05:38.026046344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:38.048126 containerd[1982]: time="2025-01-29T12:05:38.048076884Z" level=info msg="RemovePodSandbox \"3d3b393412bf386878e17ecf950908e5d390bd9718cc9afdddcbedbc111f5952\" returns successfully" Jan 29 12:05:38.060815 containerd[1982]: time="2025-01-29T12:05:38.060770033Z" level=info msg="StopPodSandbox for \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\"" Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.106 [WARNING][5898] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0", GenerateName:"calico-apiserver-87c886d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"560b730b-9376-42a3-8fbf-e9fdd620b24a", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87c886d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff", Pod:"calico-apiserver-87c886d86-cz5vh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6a1f23bead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.107 [INFO][5898] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.107 [INFO][5898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" iface="eth0" netns="" Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.107 [INFO][5898] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.107 [INFO][5898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.134 [INFO][5905] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" HandleID="k8s-pod-network.d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.134 [INFO][5905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.135 [INFO][5905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.142 [WARNING][5905] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" HandleID="k8s-pod-network.d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.142 [INFO][5905] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" HandleID="k8s-pod-network.d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.144 [INFO][5905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:38.149654 containerd[1982]: 2025-01-29 12:05:38.146 [INFO][5898] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:38.149654 containerd[1982]: time="2025-01-29T12:05:38.148383977Z" level=info msg="TearDown network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\" successfully" Jan 29 12:05:38.149654 containerd[1982]: time="2025-01-29T12:05:38.148414723Z" level=info msg="StopPodSandbox for \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\" returns successfully" Jan 29 12:05:38.149654 containerd[1982]: time="2025-01-29T12:05:38.149028134Z" level=info msg="RemovePodSandbox for \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\"" Jan 29 12:05:38.149654 containerd[1982]: time="2025-01-29T12:05:38.149062342Z" level=info msg="Forcibly stopping sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\"" Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.201 [WARNING][5924] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0", GenerateName:"calico-apiserver-87c886d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"560b730b-9376-42a3-8fbf-e9fdd620b24a", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87c886d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"054d50d1d06924e8ca890025193af22db4e41cf09e6b9f67cc28b98e5a0f1bff", Pod:"calico-apiserver-87c886d86-cz5vh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6a1f23bead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.201 [INFO][5924] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.201 [INFO][5924] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" iface="eth0" netns="" Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.201 [INFO][5924] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.201 [INFO][5924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.233 [INFO][5930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" HandleID="k8s-pod-network.d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.233 [INFO][5930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.233 [INFO][5930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.239 [WARNING][5930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" HandleID="k8s-pod-network.d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.239 [INFO][5930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" HandleID="k8s-pod-network.d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--cz5vh-eth0" Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.243 [INFO][5930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:38.254229 containerd[1982]: 2025-01-29 12:05:38.251 [INFO][5924] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5" Jan 29 12:05:38.255243 containerd[1982]: time="2025-01-29T12:05:38.254269814Z" level=info msg="TearDown network for sandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\" successfully" Jan 29 12:05:38.261866 containerd[1982]: time="2025-01-29T12:05:38.261813110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:38.262117 containerd[1982]: time="2025-01-29T12:05:38.262001564Z" level=info msg="RemovePodSandbox \"d492a46830b885e6d5ca228ea9b68f747a9d10bcd565b9b9d1f0f2cc58d01bd5\" returns successfully" Jan 29 12:05:38.264422 containerd[1982]: time="2025-01-29T12:05:38.264222219Z" level=info msg="StopPodSandbox for \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\"" Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.321 [WARNING][5948] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0", GenerateName:"calico-kube-controllers-c7c6ddf45-", Namespace:"calico-system", SelfLink:"", UID:"944ed141-f023-4731-9f50-9c286c6b5644", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7c6ddf45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2", Pod:"calico-kube-controllers-c7c6ddf45-r7rfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3e7cb694b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.322 [INFO][5948] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.322 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" iface="eth0" netns="" Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.322 [INFO][5948] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.322 [INFO][5948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.354 [INFO][5954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" HandleID="k8s-pod-network.3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.354 [INFO][5954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.354 [INFO][5954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.361 [WARNING][5954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" HandleID="k8s-pod-network.3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.361 [INFO][5954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" HandleID="k8s-pod-network.3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.363 [INFO][5954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:38.367157 containerd[1982]: 2025-01-29 12:05:38.365 [INFO][5948] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:38.368114 containerd[1982]: time="2025-01-29T12:05:38.367203190Z" level=info msg="TearDown network for sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\" successfully" Jan 29 12:05:38.368114 containerd[1982]: time="2025-01-29T12:05:38.367236419Z" level=info msg="StopPodSandbox for \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\" returns successfully" Jan 29 12:05:38.368366 containerd[1982]: time="2025-01-29T12:05:38.368335588Z" level=info msg="RemovePodSandbox for \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\"" Jan 29 12:05:38.368444 containerd[1982]: time="2025-01-29T12:05:38.368370195Z" level=info msg="Forcibly stopping sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\"" Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.448 [WARNING][5972] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0", GenerateName:"calico-kube-controllers-c7c6ddf45-", Namespace:"calico-system", SelfLink:"", UID:"944ed141-f023-4731-9f50-9c286c6b5644", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7c6ddf45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"5f55fb4070c9bdb70105a8c621caf3aa1185ea0158bafb69ddc46ea3284527a2", Pod:"calico-kube-controllers-c7c6ddf45-r7rfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3e7cb694b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.448 [INFO][5972] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.448 [INFO][5972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" iface="eth0" netns="" Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.448 [INFO][5972] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.448 [INFO][5972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.480 [INFO][5978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" HandleID="k8s-pod-network.3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.481 [INFO][5978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.481 [INFO][5978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.489 [WARNING][5978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" HandleID="k8s-pod-network.3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.489 [INFO][5978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" HandleID="k8s-pod-network.3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Workload="ip--172--31--23--23-k8s-calico--kube--controllers--c7c6ddf45--r7rfn-eth0" Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.491 [INFO][5978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:38.496459 containerd[1982]: 2025-01-29 12:05:38.494 [INFO][5972] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad" Jan 29 12:05:38.496459 containerd[1982]: time="2025-01-29T12:05:38.496413734Z" level=info msg="TearDown network for sandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\" successfully" Jan 29 12:05:38.504575 containerd[1982]: time="2025-01-29T12:05:38.504285161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:38.504575 containerd[1982]: time="2025-01-29T12:05:38.504574313Z" level=info msg="RemovePodSandbox \"3e76ba69be7e2648e4b1b27d695fdcf766f6e2d3f2a577d4f0946a8e159f4cad\" returns successfully" Jan 29 12:05:38.505576 containerd[1982]: time="2025-01-29T12:05:38.505154525Z" level=info msg="StopPodSandbox for \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\"" Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.554 [WARNING][5996] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0", GenerateName:"calico-apiserver-87c886d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"627faa7d-aa48-4603-9ed6-cae093344773", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87c886d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c", Pod:"calico-apiserver-87c886d86-tj5l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ec77b49e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.555 [INFO][5996] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.555 [INFO][5996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" iface="eth0" netns="" Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.555 [INFO][5996] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.555 [INFO][5996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.585 [INFO][6003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" HandleID="k8s-pod-network.e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.585 [INFO][6003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.585 [INFO][6003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.593 [WARNING][6003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" HandleID="k8s-pod-network.e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.593 [INFO][6003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" HandleID="k8s-pod-network.e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.596 [INFO][6003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:38.600060 containerd[1982]: 2025-01-29 12:05:38.598 [INFO][5996] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:38.601424 containerd[1982]: time="2025-01-29T12:05:38.600104964Z" level=info msg="TearDown network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\" successfully" Jan 29 12:05:38.601424 containerd[1982]: time="2025-01-29T12:05:38.600136900Z" level=info msg="StopPodSandbox for \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\" returns successfully" Jan 29 12:05:38.601424 containerd[1982]: time="2025-01-29T12:05:38.600898996Z" level=info msg="RemovePodSandbox for \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\"" Jan 29 12:05:38.601424 containerd[1982]: time="2025-01-29T12:05:38.600955778Z" level=info msg="Forcibly stopping sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\"" Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.658 [WARNING][6021] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0", GenerateName:"calico-apiserver-87c886d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"627faa7d-aa48-4603-9ed6-cae093344773", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87c886d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"2f73711eb780d2661e26bd2226823de1420e262c685f414f82cbd995c74df64c", Pod:"calico-apiserver-87c886d86-tj5l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ec77b49e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.658 [INFO][6021] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.658 [INFO][6021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" iface="eth0" netns="" Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.658 [INFO][6021] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.658 [INFO][6021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.696 [INFO][6028] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" HandleID="k8s-pod-network.e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.696 [INFO][6028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.696 [INFO][6028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.703 [WARNING][6028] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" HandleID="k8s-pod-network.e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.704 [INFO][6028] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" HandleID="k8s-pod-network.e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Workload="ip--172--31--23--23-k8s-calico--apiserver--87c886d86--tj5l2-eth0" Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.706 [INFO][6028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:38.711480 containerd[1982]: 2025-01-29 12:05:38.709 [INFO][6021] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f" Jan 29 12:05:38.712274 containerd[1982]: time="2025-01-29T12:05:38.711557866Z" level=info msg="TearDown network for sandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\" successfully" Jan 29 12:05:38.720984 containerd[1982]: time="2025-01-29T12:05:38.720318245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:38.720984 containerd[1982]: time="2025-01-29T12:05:38.720407173Z" level=info msg="RemovePodSandbox \"e07f911cf514205f8b5b78a635b960b1e23eb6ecacafdaa4e665401bea331e9f\" returns successfully" Jan 29 12:05:38.722592 containerd[1982]: time="2025-01-29T12:05:38.722455857Z" level=info msg="StopPodSandbox for \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\"" Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.780 [WARNING][6047] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cea11a14-767a-481c-bdf8-160a5f9f8aed", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14", Pod:"csi-node-driver-b2lqs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3edb94721a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.781 [INFO][6047] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.781 [INFO][6047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" iface="eth0" netns="" Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.781 [INFO][6047] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.781 [INFO][6047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.838 [INFO][6053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" HandleID="k8s-pod-network.3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.838 [INFO][6053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.838 [INFO][6053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.850 [WARNING][6053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" HandleID="k8s-pod-network.3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.850 [INFO][6053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" HandleID="k8s-pod-network.3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.852 [INFO][6053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:38.857900 containerd[1982]: 2025-01-29 12:05:38.855 [INFO][6047] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:38.857900 containerd[1982]: time="2025-01-29T12:05:38.857278147Z" level=info msg="TearDown network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\" successfully" Jan 29 12:05:38.857900 containerd[1982]: time="2025-01-29T12:05:38.857309491Z" level=info msg="StopPodSandbox for \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\" returns successfully" Jan 29 12:05:38.859821 containerd[1982]: time="2025-01-29T12:05:38.858641738Z" level=info msg="RemovePodSandbox for \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\"" Jan 29 12:05:38.859821 containerd[1982]: time="2025-01-29T12:05:38.858677591Z" level=info msg="Forcibly stopping sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\"" Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.931 [WARNING][6071] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cea11a14-767a-481c-bdf8-160a5f9f8aed", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"5fc27de997afe604e9763a00fdbccda52e2c7edb87da2808dfb3c3e8bc14fc14", Pod:"csi-node-driver-b2lqs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3edb94721a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.934 [INFO][6071] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.935 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" iface="eth0" netns="" Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.935 [INFO][6071] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.935 [INFO][6071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.972 [INFO][6077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" HandleID="k8s-pod-network.3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.972 [INFO][6077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.972 [INFO][6077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.985 [WARNING][6077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" HandleID="k8s-pod-network.3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.985 [INFO][6077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" HandleID="k8s-pod-network.3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Workload="ip--172--31--23--23-k8s-csi--node--driver--b2lqs-eth0" Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.994 [INFO][6077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:39.003934 containerd[1982]: 2025-01-29 12:05:38.998 [INFO][6071] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313" Jan 29 12:05:39.003934 containerd[1982]: time="2025-01-29T12:05:39.003782708Z" level=info msg="TearDown network for sandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\" successfully" Jan 29 12:05:39.013823 containerd[1982]: time="2025-01-29T12:05:39.013416434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:39.013823 containerd[1982]: time="2025-01-29T12:05:39.013520059Z" level=info msg="RemovePodSandbox \"3ed29a8ced6d992901f024a8583523e98f5481441f482819981f907d9b405313\" returns successfully" Jan 29 12:05:39.014383 containerd[1982]: time="2025-01-29T12:05:39.014275482Z" level=info msg="StopPodSandbox for \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\"" Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.112 [WARNING][6095] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f", Pod:"coredns-668d6bf9bc-457dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia66cddfa2a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.114 [INFO][6095] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.114 [INFO][6095] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" iface="eth0" netns="" Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.114 [INFO][6095] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.114 [INFO][6095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.177 [INFO][6101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" HandleID="k8s-pod-network.5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0" Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.178 [INFO][6101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.178 [INFO][6101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.192 [WARNING][6101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" HandleID="k8s-pod-network.5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0"
Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.192 [INFO][6101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" HandleID="k8s-pod-network.5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0"
Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.195 [INFO][6101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:05:39.209174 containerd[1982]: 2025-01-29 12:05:39.202 [INFO][6095] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f"
Jan 29 12:05:39.209174 containerd[1982]: time="2025-01-29T12:05:39.208014058Z" level=info msg="TearDown network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\" successfully"
Jan 29 12:05:39.209174 containerd[1982]: time="2025-01-29T12:05:39.208043990Z" level=info msg="StopPodSandbox for \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\" returns successfully"
Jan 29 12:05:39.209174 containerd[1982]: time="2025-01-29T12:05:39.208706475Z" level=info msg="RemovePodSandbox for \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\""
Jan 29 12:05:39.209174 containerd[1982]: time="2025-01-29T12:05:39.208742085Z" level=info msg="Forcibly stopping sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\""
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.290 [WARNING][6120] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7cfb1c5f-4ca9-4bb5-93f1-2f4a39b22974", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-23", ContainerID:"638856db84a605f36574fb03647bdef28a2cc6cd1331ac08366817fc86f04d0f", Pod:"coredns-668d6bf9bc-457dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia66cddfa2a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.290 [INFO][6120] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f"
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.290 [INFO][6120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" iface="eth0" netns=""
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.290 [INFO][6120] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f"
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.290 [INFO][6120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f"
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.326 [INFO][6126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" HandleID="k8s-pod-network.5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0"
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.326 [INFO][6126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.326 [INFO][6126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.345 [WARNING][6126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" HandleID="k8s-pod-network.5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0"
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.346 [INFO][6126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" HandleID="k8s-pod-network.5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f" Workload="ip--172--31--23--23-k8s-coredns--668d6bf9bc--457dw-eth0"
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.356 [INFO][6126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:05:39.361786 containerd[1982]: 2025-01-29 12:05:39.359 [INFO][6120] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f"
Jan 29 12:05:39.363087 containerd[1982]: time="2025-01-29T12:05:39.362266841Z" level=info msg="TearDown network for sandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\" successfully"
Jan 29 12:05:39.369461 containerd[1982]: time="2025-01-29T12:05:39.369283987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:05:39.369461 containerd[1982]: time="2025-01-29T12:05:39.369363251Z" level=info msg="RemovePodSandbox \"5fac3b8cd7a39e5bec8a00459b4ab8ae33d2565287f57b4acc25896a8745205f\" returns successfully"
Jan 29 12:05:40.931341 systemd[1]: Started sshd@9-172.31.23.23:22-139.178.68.195:52906.service - OpenSSH per-connection server daemon (139.178.68.195:52906).
Jan 29 12:05:41.217281 sshd[6158]: Accepted publickey for core from 139.178.68.195 port 52906 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:05:41.234897 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:41.274088 systemd-logind[1964]: New session 10 of user core.
Jan 29 12:05:41.285507 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 12:05:41.622047 sshd[6158]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:41.632616 systemd[1]: sshd@9-172.31.23.23:22-139.178.68.195:52906.service: Deactivated successfully.
Jan 29 12:05:41.641007 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 12:05:41.641517 kubelet[3452]: I0129 12:05:41.641232 3452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:05:41.654425 systemd-logind[1964]: Session 10 logged out. Waiting for processes to exit.
Jan 29 12:05:41.686000 systemd[1]: Started sshd@10-172.31.23.23:22-139.178.68.195:52916.service - OpenSSH per-connection server daemon (139.178.68.195:52916).
Jan 29 12:05:41.693661 systemd-logind[1964]: Removed session 10.
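[Editor's note] Every IPAM release above is bracketed by "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock." / "Released host-wide IPAM lock.", which serializes allocation-state changes across concurrent CNI invocations on the node. A sketch of how such a host-wide lock can be built on flock(2); the lock-file path here is illustrative, not necessarily the one Calico uses:

    package main

    import (
        "os"
        "syscall"
    )

    // acquireHostWideLock blocks until this process holds an exclusive
    // advisory lock on a well-known file, so concurrent CNI ADD/DEL
    // invocations mutate IPAM state one at a time.
    func acquireHostWideLock(path string) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return nil, err
        }
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            f.Close()
            return nil, err
        }
        return f, nil
    }

    func main() {
        f, err := acquireHostWideLock("/var/run/example-ipam.lock") // illustrative path
        if err != nil {
            panic(err)
        }
        defer f.Close() // closing the descriptor releases the flock
        // ... read/modify/write IPAM allocations while the lock is held ...
    }

Because the lock is per-file rather than per-process, it protects against other CNI plugin invocations on the same host, which is exactly the window the "About to acquire" / "Acquired" log pair measures.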
Jan 29 12:05:41.751918 kubelet[3452]: I0129 12:05:41.751822 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b2lqs" podStartSLOduration=40.570027731 podStartE2EDuration="50.751799739s" podCreationTimestamp="2025-01-29 12:04:51 +0000 UTC" firstStartedPulling="2025-01-29 12:05:23.769848293 +0000 UTC m=+46.626444303" lastFinishedPulling="2025-01-29 12:05:33.951620299 +0000 UTC m=+56.808216311" observedRunningTime="2025-01-29 12:05:34.248971006 +0000 UTC m=+57.105567029" watchObservedRunningTime="2025-01-29 12:05:41.751799739 +0000 UTC m=+64.608395758"
Jan 29 12:05:41.885448 sshd[6172]: Accepted publickey for core from 139.178.68.195 port 52916 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:05:41.888192 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:41.895067 systemd-logind[1964]: New session 11 of user core.
Jan 29 12:05:41.900700 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 12:05:42.401561 sshd[6172]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:42.411863 systemd[1]: sshd@10-172.31.23.23:22-139.178.68.195:52916.service: Deactivated successfully.
Jan 29 12:05:42.412699 systemd-logind[1964]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:05:42.421080 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 12:05:42.444971 systemd-logind[1964]: Removed session 11.
Jan 29 12:05:42.454057 systemd[1]: Started sshd@11-172.31.23.23:22-139.178.68.195:52920.service - OpenSSH per-connection server daemon (139.178.68.195:52920).
Jan 29 12:05:42.672811 sshd[6185]: Accepted publickey for core from 139.178.68.195 port 52920 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:05:42.675090 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:42.681404 systemd-logind[1964]: New session 12 of user core.
Jan 29 12:05:42.688246 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 12:05:42.996455 sshd[6185]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:43.002609 systemd[1]: sshd@11-172.31.23.23:22-139.178.68.195:52920.service: Deactivated successfully.
Jan 29 12:05:43.006657 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 12:05:43.007811 systemd-logind[1964]: Session 12 logged out. Waiting for processes to exit.
Jan 29 12:05:43.009360 systemd-logind[1964]: Removed session 12.
Jan 29 12:05:48.092551 systemd[1]: Started sshd@12-172.31.23.23:22-139.178.68.195:34390.service - OpenSSH per-connection server daemon (139.178.68.195:34390).
Jan 29 12:05:48.313716 sshd[6203]: Accepted publickey for core from 139.178.68.195 port 34390 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:05:48.316083 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:48.323335 systemd-logind[1964]: New session 13 of user core.
Jan 29 12:05:48.330742 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 12:05:48.777882 sshd[6203]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:48.785379 systemd-logind[1964]: Session 13 logged out. Waiting for processes to exit.
Jan 29 12:05:48.787899 systemd[1]: sshd@12-172.31.23.23:22-139.178.68.195:34390.service: Deactivated successfully.
Jan 29 12:05:48.795617 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 12:05:48.799948 systemd-logind[1964]: Removed session 13.
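[Editor's note] The pod_startup_latency_tracker entry above carries its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (12:05:41.751799739 - 12:04:51 = 50.751799739s), and podStartSLOduration additionally excludes the image-pull window, taken from the monotonic m=+ offsets (56.808216311 - 46.626444303 = 10.181772008s), leaving 40.570027731. A small Go check of those numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the kubelet log entry above.
        created := time.Date(2025, time.January, 29, 12, 4, 51, 0, time.UTC)
        watched := time.Date(2025, time.January, 29, 12, 5, 41, 751799739, time.UTC)

        // Image-pull window from the monotonic m=+ offsets in the same entry.
        pull := 56.808216311 - 46.626444303 // seconds spent pulling

        e2e := watched.Sub(created).Seconds()
        fmt.Printf("podStartE2EDuration = %.9fs\n", e2e)     // ~50.751799739s
        fmt.Printf("podStartSLOduration = %.9f\n", e2e-pull) // ~40.570027731
    }

In other words, the SLO metric deliberately does not penalize the ~10.2s this node spent pulling the csi-node-driver image.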
Jan 29 12:05:53.815971 systemd[1]: Started sshd@13-172.31.23.23:22-139.178.68.195:34400.service - OpenSSH per-connection server daemon (139.178.68.195:34400).
Jan 29 12:05:54.058241 sshd[6244]: Accepted publickey for core from 139.178.68.195 port 34400 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:05:54.062191 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:54.068558 systemd-logind[1964]: New session 14 of user core.
Jan 29 12:05:54.076824 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 12:05:54.802531 sshd[6244]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:54.809478 systemd[1]: sshd@13-172.31.23.23:22-139.178.68.195:34400.service: Deactivated successfully.
Jan 29 12:05:54.813600 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 12:05:54.814990 systemd-logind[1964]: Session 14 logged out. Waiting for processes to exit.
Jan 29 12:05:54.816874 systemd-logind[1964]: Removed session 14.
Jan 29 12:05:59.838749 systemd[1]: Started sshd@14-172.31.23.23:22-139.178.68.195:52352.service - OpenSSH per-connection server daemon (139.178.68.195:52352).
Jan 29 12:06:00.022366 sshd[6256]: Accepted publickey for core from 139.178.68.195 port 52352 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:00.023531 sshd[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:00.047759 systemd-logind[1964]: New session 15 of user core.
Jan 29 12:06:00.071977 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 12:06:00.723863 sshd[6256]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:00.742338 systemd[1]: sshd@14-172.31.23.23:22-139.178.68.195:52352.service: Deactivated successfully.
Jan 29 12:06:00.749025 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 12:06:00.750829 systemd-logind[1964]: Session 15 logged out. Waiting for processes to exit.
Jan 29 12:06:00.756014 systemd-logind[1964]: Removed session 15.
Jan 29 12:06:05.779511 systemd[1]: Started sshd@15-172.31.23.23:22-139.178.68.195:51294.service - OpenSSH per-connection server daemon (139.178.68.195:51294).
Jan 29 12:06:06.077618 sshd[6295]: Accepted publickey for core from 139.178.68.195 port 51294 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:06.080858 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:06.088916 systemd-logind[1964]: New session 16 of user core.
Jan 29 12:06:06.099822 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 12:06:06.633560 sshd[6295]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:06.680301 systemd[1]: sshd@15-172.31.23.23:22-139.178.68.195:51294.service: Deactivated successfully.
Jan 29 12:06:06.684271 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 12:06:06.687338 systemd-logind[1964]: Session 16 logged out. Waiting for processes to exit.
Jan 29 12:06:06.695040 systemd[1]: Started sshd@16-172.31.23.23:22-139.178.68.195:51310.service - OpenSSH per-connection server daemon (139.178.68.195:51310).
Jan 29 12:06:06.697579 systemd-logind[1964]: Removed session 16.
Jan 29 12:06:06.874528 sshd[6308]: Accepted publickey for core from 139.178.68.195 port 51310 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:06.879426 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:06.896527 systemd-logind[1964]: New session 17 of user core.
Jan 29 12:06:06.903874 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 12:06:07.749921 sshd[6308]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:07.759793 systemd-logind[1964]: Session 17 logged out. Waiting for processes to exit.
Jan 29 12:06:07.761348 systemd[1]: sshd@16-172.31.23.23:22-139.178.68.195:51310.service: Deactivated successfully.
Jan 29 12:06:07.765736 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 12:06:07.769343 systemd-logind[1964]: Removed session 17.
Jan 29 12:06:07.792980 systemd[1]: Started sshd@17-172.31.23.23:22-139.178.68.195:51322.service - OpenSSH per-connection server daemon (139.178.68.195:51322).
Jan 29 12:06:08.021944 sshd[6319]: Accepted publickey for core from 139.178.68.195 port 51322 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:08.032657 sshd[6319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:08.044525 systemd-logind[1964]: New session 18 of user core.
Jan 29 12:06:08.049772 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 12:06:09.569897 sshd[6319]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:09.595353 systemd[1]: sshd@17-172.31.23.23:22-139.178.68.195:51322.service: Deactivated successfully.
Jan 29 12:06:09.602773 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 12:06:09.608039 systemd-logind[1964]: Session 18 logged out. Waiting for processes to exit.
Jan 29 12:06:09.619952 systemd[1]: Started sshd@18-172.31.23.23:22-139.178.68.195:51324.service - OpenSSH per-connection server daemon (139.178.68.195:51324).
Jan 29 12:06:09.633755 systemd-logind[1964]: Removed session 18.
Jan 29 12:06:09.855870 sshd[6336]: Accepted publickey for core from 139.178.68.195 port 51324 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:09.857743 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:09.865597 systemd-logind[1964]: New session 19 of user core.
Jan 29 12:06:09.869719 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 12:06:11.332762 sshd[6336]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:11.340667 systemd-logind[1964]: Session 19 logged out. Waiting for processes to exit.
Jan 29 12:06:11.341168 systemd[1]: sshd@18-172.31.23.23:22-139.178.68.195:51324.service: Deactivated successfully.
Jan 29 12:06:11.345447 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 12:06:11.348032 systemd-logind[1964]: Removed session 19.
Jan 29 12:06:11.366235 systemd[1]: Started sshd@19-172.31.23.23:22-139.178.68.195:51326.service - OpenSSH per-connection server daemon (139.178.68.195:51326).
Jan 29 12:06:11.580695 sshd[6348]: Accepted publickey for core from 139.178.68.195 port 51326 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:11.585480 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:11.592596 systemd-logind[1964]: New session 20 of user core.
Jan 29 12:06:11.599726 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 12:06:11.825574 sshd[6348]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:11.833165 systemd[1]: sshd@19-172.31.23.23:22-139.178.68.195:51326.service: Deactivated successfully.
Jan 29 12:06:11.836384 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 12:06:11.839131 systemd-logind[1964]: Session 20 logged out. Waiting for processes to exit.
Jan 29 12:06:11.840655 systemd-logind[1964]: Removed session 20.
Jan 29 12:06:16.869949 systemd[1]: Started sshd@20-172.31.23.23:22-139.178.68.195:45886.service - OpenSSH per-connection server daemon (139.178.68.195:45886).
Jan 29 12:06:17.091078 sshd[6365]: Accepted publickey for core from 139.178.68.195 port 45886 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:17.093149 sshd[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:17.107977 systemd-logind[1964]: New session 21 of user core.
Jan 29 12:06:17.113735 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 12:06:17.491341 sshd[6365]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:17.509695 systemd-logind[1964]: Session 21 logged out. Waiting for processes to exit.
Jan 29 12:06:17.510393 systemd[1]: sshd@20-172.31.23.23:22-139.178.68.195:45886.service: Deactivated successfully.
Jan 29 12:06:17.513437 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 12:06:17.514768 systemd-logind[1964]: Removed session 21.
Jan 29 12:06:22.529335 systemd[1]: Started sshd@21-172.31.23.23:22-139.178.68.195:45894.service - OpenSSH per-connection server daemon (139.178.68.195:45894).
Jan 29 12:06:22.747940 sshd[6400]: Accepted publickey for core from 139.178.68.195 port 45894 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:22.756754 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:22.770394 systemd-logind[1964]: New session 22 of user core.
Jan 29 12:06:22.776767 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 12:06:23.887215 sshd[6400]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:23.903679 systemd-logind[1964]: Session 22 logged out. Waiting for processes to exit.
Jan 29 12:06:23.905366 systemd[1]: sshd@21-172.31.23.23:22-139.178.68.195:45894.service: Deactivated successfully.
Jan 29 12:06:23.910829 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 12:06:23.914727 systemd-logind[1964]: Removed session 22.
Jan 29 12:06:28.930613 systemd[1]: Started sshd@22-172.31.23.23:22-139.178.68.195:44628.service - OpenSSH per-connection server daemon (139.178.68.195:44628).
Jan 29 12:06:29.210844 sshd[6412]: Accepted publickey for core from 139.178.68.195 port 44628 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:29.212857 sshd[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:29.226617 systemd-logind[1964]: New session 23 of user core.
Jan 29 12:06:29.232346 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 12:06:29.867279 sshd[6412]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:29.872272 systemd[1]: sshd@22-172.31.23.23:22-139.178.68.195:44628.service: Deactivated successfully.
Jan 29 12:06:29.876039 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 12:06:29.878283 systemd-logind[1964]: Session 23 logged out. Waiting for processes to exit.
Jan 29 12:06:29.882726 systemd-logind[1964]: Removed session 23.
Jan 29 12:06:30.294934 systemd[1]: run-containerd-runc-k8s.io-47cb3a34a0c9356bf3baa156a3b4aa48deca17f9e2ff74cdbc33d2eb5195f6b6-runc.FOLVoR.mount: Deactivated successfully.
Jan 29 12:06:34.909875 systemd[1]: Started sshd@23-172.31.23.23:22-139.178.68.195:47222.service - OpenSSH per-connection server daemon (139.178.68.195:47222).
Jan 29 12:06:35.131870 sshd[6443]: Accepted publickey for core from 139.178.68.195 port 47222 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:35.151438 sshd[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:35.174751 systemd-logind[1964]: New session 24 of user core.
Jan 29 12:06:35.182917 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 12:06:35.436171 sshd[6443]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:35.440281 systemd[1]: sshd@23-172.31.23.23:22-139.178.68.195:47222.service: Deactivated successfully.
Jan 29 12:06:35.443214 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 12:06:35.446215 systemd-logind[1964]: Session 24 logged out. Waiting for processes to exit.
Jan 29 12:06:35.447386 systemd-logind[1964]: Removed session 24.
Jan 29 12:06:40.470866 systemd[1]: Started sshd@24-172.31.23.23:22-139.178.68.195:47238.service - OpenSSH per-connection server daemon (139.178.68.195:47238).
Jan 29 12:06:40.722023 sshd[6457]: Accepted publickey for core from 139.178.68.195 port 47238 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:40.726812 sshd[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:40.741396 systemd-logind[1964]: New session 25 of user core.
Jan 29 12:06:40.747123 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 12:06:41.128771 sshd[6457]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:41.136193 systemd[1]: sshd@24-172.31.23.23:22-139.178.68.195:47238.service: Deactivated successfully.
Jan 29 12:06:41.137043 systemd-logind[1964]: Session 25 logged out. Waiting for processes to exit.
Jan 29 12:06:41.146353 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 12:06:41.150570 systemd-logind[1964]: Removed session 25.
Jan 29 12:06:46.168999 systemd[1]: Started sshd@25-172.31.23.23:22-139.178.68.195:47862.service - OpenSSH per-connection server daemon (139.178.68.195:47862).
Jan 29 12:06:46.383237 sshd[6500]: Accepted publickey for core from 139.178.68.195 port 47862 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:06:46.385846 sshd[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:46.415770 systemd-logind[1964]: New session 26 of user core.
Jan 29 12:06:46.424886 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 12:06:46.725595 sshd[6500]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:46.731260 systemd[1]: sshd@25-172.31.23.23:22-139.178.68.195:47862.service: Deactivated successfully.
Jan 29 12:06:46.733675 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 12:06:46.744223 systemd-logind[1964]: Session 26 logged out. Waiting for processes to exit.
Jan 29 12:06:46.749346 systemd-logind[1964]: Removed session 26.
Jan 29 12:07:01.792383 systemd[1]: cri-containerd-baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126.scope: Deactivated successfully.
Jan 29 12:07:01.793512 systemd[1]: cri-containerd-baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126.scope: Consumed 3.656s CPU time.
Jan 29 12:07:02.177617 systemd[1]: cri-containerd-234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de.scope: Deactivated successfully.
Jan 29 12:07:02.177932 systemd[1]: cri-containerd-234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de.scope: Consumed 3.319s CPU time, 24.6M memory peak, 0B memory swap peak.
Jan 29 12:07:02.448325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126-rootfs.mount: Deactivated successfully.
Jan 29 12:07:02.469763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de-rootfs.mount: Deactivated successfully.
Jan 29 12:07:02.570924 containerd[1982]: time="2025-01-29T12:07:02.520694582Z" level=info msg="shim disconnected" id=baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126 namespace=k8s.io
Jan 29 12:07:02.571771 containerd[1982]: time="2025-01-29T12:07:02.507529222Z" level=info msg="shim disconnected" id=234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de namespace=k8s.io
Jan 29 12:07:02.584418 containerd[1982]: time="2025-01-29T12:07:02.583649348Z" level=warning msg="cleaning up after shim disconnected" id=234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de namespace=k8s.io
Jan 29 12:07:02.584418 containerd[1982]: time="2025-01-29T12:07:02.583700127Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:07:02.586795 containerd[1982]: time="2025-01-29T12:07:02.586740899Z" level=warning msg="cleaning up after shim disconnected" id=baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126 namespace=k8s.io
Jan 29 12:07:02.586795 containerd[1982]: time="2025-01-29T12:07:02.586785376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:07:02.877069 containerd[1982]: time="2025-01-29T12:07:02.876999615Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:07:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 12:07:03.074909 kubelet[3452]: I0129 12:07:03.074764 3452 scope.go:117] "RemoveContainer" containerID="baeb1c2e825af8814a462c91fb6306607d43186c12936ba2ee86000e8e3d3126"
Jan 29 12:07:03.080796 kubelet[3452]: I0129 12:07:03.080617 3452 scope.go:117] "RemoveContainer" containerID="234cd70772b3736f41fffd50277ca99e75d30d28570ff99924e8a9885e03c6de"
Jan 29 12:07:03.095071 containerd[1982]: time="2025-01-29T12:07:03.094913509Z" level=info msg="CreateContainer within sandbox \"0651672b0a2e58c9ecdce457b8bcd0c8aacb090fca3bddfe4205dd5866440bfa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 29 12:07:03.101264 containerd[1982]: time="2025-01-29T12:07:03.100816901Z" level=info msg="CreateContainer within sandbox \"d9f41ee76f06550e4716c68fe8eb5214da2b87bf0353da663e606603623e3570\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 29 12:07:03.231864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907508816.mount: Deactivated successfully.
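[Editor's note] This is a shim crash-and-restart cycle: systemd reports two cri-containerd-… scopes gone, containerd logs "shim disconnected" and cleans up the dead shims, and the kubelet reacts with "RemoveContainer" followed by CreateContainer with Attempt:1 inside the still-running sandboxes (apparently kube-controller-manager and the tigera-operator, judging by the CreateContainer calls that follow the matching RemoveContainer entries). The same exit events can be observed externally with the containerd Go client; a sketch assuming the default socket path and the k8s.io namespace seen in the log (exact import paths, e.g. typeurl v1 vs v2, vary across containerd releases):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        apievents "github.com/containerd/containerd/api/events"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/typeurl/v2"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the k8s.io namespace, as in the log.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Subscribe to task exit events only.
        ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
        for {
            select {
            case env := <-ch:
                ev, err := typeurl.UnmarshalAny(env.Event)
                if err != nil {
                    continue // skip events we cannot decode
                }
                if exit, ok := ev.(*apievents.TaskExit); ok {
                    fmt.Printf("task %s exited with status %d\n", exit.ContainerID, exit.ExitStatus)
                }
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }

Note that the kubelet does not need such a subscription itself; it learns about the exit through the CRI and drives the restart via RemoveContainer/CreateContainer, as logged above.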
Jan 29 12:07:03.273228 containerd[1982]: time="2025-01-29T12:07:03.273174132Z" level=info msg="CreateContainer within sandbox \"d9f41ee76f06550e4716c68fe8eb5214da2b87bf0353da663e606603623e3570\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"78799ce7c8ed6cb18f2c79227a9f58be2c784c9d7318c17a1c978629c0e4b3f2\""
Jan 29 12:07:03.274770 containerd[1982]: time="2025-01-29T12:07:03.274435349Z" level=info msg="StartContainer for \"78799ce7c8ed6cb18f2c79227a9f58be2c784c9d7318c17a1c978629c0e4b3f2\""
Jan 29 12:07:03.276876 containerd[1982]: time="2025-01-29T12:07:03.276692982Z" level=info msg="CreateContainer within sandbox \"0651672b0a2e58c9ecdce457b8bcd0c8aacb090fca3bddfe4205dd5866440bfa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e8d0b7652660c922a8fddde49ca57c29975ca9810b3c32f1597d4a2cb24b2117\""
Jan 29 12:07:03.277799 containerd[1982]: time="2025-01-29T12:07:03.277767339Z" level=info msg="StartContainer for \"e8d0b7652660c922a8fddde49ca57c29975ca9810b3c32f1597d4a2cb24b2117\""
Jan 29 12:07:03.346833 systemd[1]: Started cri-containerd-78799ce7c8ed6cb18f2c79227a9f58be2c784c9d7318c17a1c978629c0e4b3f2.scope - libcontainer container 78799ce7c8ed6cb18f2c79227a9f58be2c784c9d7318c17a1c978629c0e4b3f2.
Jan 29 12:07:03.357951 systemd[1]: Started cri-containerd-e8d0b7652660c922a8fddde49ca57c29975ca9810b3c32f1597d4a2cb24b2117.scope - libcontainer container e8d0b7652660c922a8fddde49ca57c29975ca9810b3c32f1597d4a2cb24b2117.
Jan 29 12:07:03.428593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903160389.mount: Deactivated successfully.
Jan 29 12:07:03.459419 containerd[1982]: time="2025-01-29T12:07:03.458578123Z" level=info msg="StartContainer for \"78799ce7c8ed6cb18f2c79227a9f58be2c784c9d7318c17a1c978629c0e4b3f2\" returns successfully"
Jan 29 12:07:03.492174 containerd[1982]: time="2025-01-29T12:07:03.492078796Z" level=info msg="StartContainer for \"e8d0b7652660c922a8fddde49ca57c29975ca9810b3c32f1597d4a2cb24b2117\" returns successfully"
Jan 29 12:07:07.592813 systemd[1]: cri-containerd-381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b.scope: Deactivated successfully.
Jan 29 12:07:07.593400 systemd[1]: cri-containerd-381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b.scope: Consumed 2.461s CPU time, 21.1M memory peak, 0B memory swap peak.
Jan 29 12:07:07.672227 containerd[1982]: time="2025-01-29T12:07:07.672026175Z" level=info msg="shim disconnected" id=381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b namespace=k8s.io
Jan 29 12:07:07.672227 containerd[1982]: time="2025-01-29T12:07:07.672117129Z" level=warning msg="cleaning up after shim disconnected" id=381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b namespace=k8s.io
Jan 29 12:07:07.672227 containerd[1982]: time="2025-01-29T12:07:07.672131505Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:07:07.676845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b-rootfs.mount: Deactivated successfully.
Jan 29 12:07:08.070959 kubelet[3452]: I0129 12:07:08.070918 3452 scope.go:117] "RemoveContainer" containerID="381aa5e3947cb9541a56d609a82336c48a22f726eea93257aec7f56437147e7b"
Jan 29 12:07:08.076145 containerd[1982]: time="2025-01-29T12:07:08.076098684Z" level=info msg="CreateContainer within sandbox \"d7d9b6b85871615f5992776c1d89087c7947797c7d255c3755ce9a40432d30ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 29 12:07:08.103201 containerd[1982]: time="2025-01-29T12:07:08.102386600Z" level=info msg="CreateContainer within sandbox \"d7d9b6b85871615f5992776c1d89087c7947797c7d255c3755ce9a40432d30ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"16436c22fa61375a22e1775b72e91853fdd2c6604daf7aea1d498c2d2202ba97\""
Jan 29 12:07:08.107393 containerd[1982]: time="2025-01-29T12:07:08.107307468Z" level=info msg="StartContainer for \"16436c22fa61375a22e1775b72e91853fdd2c6604daf7aea1d498c2d2202ba97\""
Jan 29 12:07:08.109331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383634703.mount: Deactivated successfully.
Jan 29 12:07:08.170260 systemd[1]: Started cri-containerd-16436c22fa61375a22e1775b72e91853fdd2c6604daf7aea1d498c2d2202ba97.scope - libcontainer container 16436c22fa61375a22e1775b72e91853fdd2c6604daf7aea1d498c2d2202ba97.
Jan 29 12:07:08.248939 containerd[1982]: time="2025-01-29T12:07:08.248888811Z" level=info msg="StartContainer for \"16436c22fa61375a22e1775b72e91853fdd2c6604daf7aea1d498c2d2202ba97\" returns successfully"
Jan 29 12:07:09.880208 kubelet[3452]: E0129 12:07:09.880138 3452 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-23?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 12:07:19.880966 kubelet[3452]: E0129 12:07:19.880853 3452 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-23?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
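[Editor's note] The closing "Failed to update lease" errors are the kubelet timing out on its 10-second PUT of the node Lease in kube-node-lease, moments after this node's kube-scheduler and kube-controller-manager containers were restarted; if renewals keep failing past the node-monitor grace period, the node controller marks the node NotReady. The renewal itself is a small coordination.k8s.io operation; a client-go sketch, with in-cluster config assumed and the node name taken from the log:

    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // renewLease approximates what the kubelet does on each renewal tick:
    // fetch the node's Lease from kube-node-lease and bump spec.renewTime.
    func renewLease(cs *kubernetes.Clientset, node string) error {
        leases := cs.CoordinationV1().Leases("kube-node-lease")
        lease, err := leases.Get(context.TODO(), node, metav1.GetOptions{})
        if err != nil {
            return err
        }
        now := metav1.NewMicroTime(time.Now())
        lease.Spec.RenewTime = &now
        _, err = leases.Update(context.TODO(), lease, metav1.UpdateOptions{})
        return err
    }

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := renewLease(cs, "ip-172-31-23-23"); err != nil {
            log.Fatal(err) // e.g. "Client.Timeout exceeded while awaiting headers"
        }
    }

The timeouts here, coinciding with the control-plane container restarts logged just above, suggest the apiserver at 172.31.23.23:6443 was briefly slow or unreachable rather than the kubelet itself being at fault.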