Nov 12 20:49:40.957984 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:49:40.958028 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:49:40.958046 kernel: BIOS-provided physical RAM map:
Nov 12 20:49:40.958059 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 20:49:40.958071 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 20:49:40.958084 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 20:49:40.958102 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Nov 12 20:49:40.958116 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Nov 12 20:49:40.958173 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Nov 12 20:49:40.958187 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 20:49:40.958201 kernel: NX (Execute Disable) protection: active
Nov 12 20:49:40.958214 kernel: APIC: Static calls initialized
Nov 12 20:49:40.958228 kernel: SMBIOS 2.7 present.
Nov 12 20:49:40.958398 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Nov 12 20:49:40.958425 kernel: Hypervisor detected: KVM
Nov 12 20:49:40.958441 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:49:40.958456 kernel: kvm-clock: using sched offset of 6324821005 cycles
Nov 12 20:49:40.958471 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:49:40.958487 kernel: tsc: Detected 2499.996 MHz processor
Nov 12 20:49:40.958502 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:49:40.958517 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:49:40.958536 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Nov 12 20:49:40.958552 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 20:49:40.958571 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:49:40.958586 kernel: Using GB pages for direct mapping
Nov 12 20:49:40.958621 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:49:40.958635 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Nov 12 20:49:40.958646 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Nov 12 20:49:40.958656 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 12 20:49:40.958667 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 12 20:49:40.958684 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Nov 12 20:49:40.958698 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 12 20:49:40.958711 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 12 20:49:40.958723 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Nov 12 20:49:40.958735 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 12 20:49:40.958748 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Nov 12 20:49:40.958760 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Nov 12 20:49:40.958773 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 12 20:49:40.958786 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Nov 12 20:49:40.958803 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Nov 12 20:49:40.958822 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Nov 12 20:49:40.958838 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Nov 12 20:49:40.958854 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Nov 12 20:49:40.958871 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Nov 12 20:49:40.958889 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Nov 12 20:49:40.958905 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Nov 12 20:49:40.958921 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Nov 12 20:49:40.958995 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Nov 12 20:49:40.959011 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:49:40.959024 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 12 20:49:40.959037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Nov 12 20:49:40.959050 kernel: NUMA: Initialized distance table, cnt=1
Nov 12 20:49:40.959064 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Nov 12 20:49:40.959138 kernel: Zone ranges:
Nov 12 20:49:40.959156 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:49:40.959177 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Nov 12 20:49:40.959194 kernel: Normal empty
Nov 12 20:49:40.959209 kernel: Movable zone start for each node
Nov 12 20:49:40.959226 kernel: Early memory node ranges
Nov 12 20:49:40.959242 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 20:49:40.959258 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Nov 12 20:49:40.959275 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Nov 12 20:49:40.959295 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:49:40.959313 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 20:49:40.959329 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Nov 12 20:49:40.959346 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 12 20:49:40.959361 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:49:40.959374 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Nov 12 20:49:40.959388 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:49:40.959404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:49:40.959420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:49:40.959434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:49:40.959450 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:49:40.959463 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:49:40.959478 kernel: TSC deadline timer available
Nov 12 20:49:40.959492 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 12 20:49:40.959507 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:49:40.959521 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Nov 12 20:49:40.959534 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:49:40.959547 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:49:40.959560 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 20:49:40.959577 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 20:49:40.959590 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 20:49:40.959619 kernel: pcpu-alloc: [0] 0 1
Nov 12 20:49:40.959634 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:49:40.959648 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:49:40.959663 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:49:40.959677 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:49:40.959690 kernel: random: crng init done
Nov 12 20:49:40.959706 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:49:40.959720 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:49:40.959734 kernel: Fallback order for Node 0: 0
Nov 12 20:49:40.959747 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Nov 12 20:49:40.959761 kernel: Policy zone: DMA32
Nov 12 20:49:40.959774 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:49:40.959788 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 125152K reserved, 0K cma-reserved)
Nov 12 20:49:40.959801 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 20:49:40.959817 kernel: Kernel/User page tables isolation: enabled
Nov 12 20:49:40.959830 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:49:40.959843 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:49:40.959857 kernel: Dynamic Preempt: voluntary
Nov 12 20:49:40.959871 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:49:40.959885 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:49:40.959898 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 20:49:40.959912 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:49:40.959925 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:49:40.959938 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:49:40.959955 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:49:40.959969 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 20:49:40.959982 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 12 20:49:40.959995 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:49:40.960008 kernel: Console: colour VGA+ 80x25
Nov 12 20:49:40.960021 kernel: printk: console [ttyS0] enabled
Nov 12 20:49:40.960035 kernel: ACPI: Core revision 20230628
Nov 12 20:49:40.960048 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Nov 12 20:49:40.960062 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:49:40.960077 kernel: x2apic enabled
Nov 12 20:49:40.960091 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:49:40.960116 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 12 20:49:40.960134 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Nov 12 20:49:40.960148 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 12 20:49:40.960162 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Nov 12 20:49:40.960177 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:49:40.960190 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:49:40.960204 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:49:40.960218 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:49:40.960233 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 12 20:49:40.960247 kernel: RETBleed: Vulnerable
Nov 12 20:49:40.960268 kernel: Speculative Store Bypass: Vulnerable
Nov 12 20:49:40.960283 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:49:40.960299 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:49:40.960316 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 12 20:49:40.960333 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:49:40.960350 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:49:40.960367 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:49:40.960458 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 12 20:49:40.960479 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 12 20:49:40.960496 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 12 20:49:40.960513 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 12 20:49:40.960531 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 12 20:49:40.960547 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 12 20:49:40.960564 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:49:40.960581 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 12 20:49:40.960598 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 12 20:49:40.960627 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Nov 12 20:49:40.960648 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Nov 12 20:49:40.960664 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Nov 12 20:49:40.960681 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Nov 12 20:49:40.960698 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Nov 12 20:49:40.960716 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:49:40.960733 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:49:40.960750 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:49:40.960768 kernel: landlock: Up and running.
Nov 12 20:49:40.960785 kernel: SELinux: Initializing.
Nov 12 20:49:40.960802 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:49:40.960819 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:49:40.960836 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 12 20:49:40.960858 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:49:40.960875 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:49:40.960893 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:49:40.960911 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 12 20:49:40.960928 kernel: signal: max sigframe size: 3632
Nov 12 20:49:40.960944 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:49:40.960963 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:49:40.960980 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 12 20:49:40.960997 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:49:40.961017 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:49:40.961034 kernel: .... node #0, CPUs: #1
Nov 12 20:49:40.961052 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 12 20:49:40.961071 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 12 20:49:40.961089 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 20:49:40.961106 kernel: smpboot: Max logical packages: 1
Nov 12 20:49:40.961124 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Nov 12 20:49:40.961141 kernel: devtmpfs: initialized
Nov 12 20:49:40.961162 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:49:40.961180 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:49:40.961198 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 20:49:40.961215 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:49:40.961233 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:49:40.961251 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:49:40.961268 kernel: audit: type=2000 audit(1731444579.763:1): state=initialized audit_enabled=0 res=1
Nov 12 20:49:40.961285 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:49:40.961301 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:49:40.961322 kernel: cpuidle: using governor menu
Nov 12 20:49:40.961339 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:49:40.961357 kernel: dca service started, version 1.12.1
Nov 12 20:49:40.961374 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:49:40.961391 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:49:40.961409 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:49:40.961427 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:49:40.961445 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:49:40.961461 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:49:40.961482 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:49:40.961496 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:49:40.961511 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:49:40.961529 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:49:40.961546 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 12 20:49:40.961563 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:49:40.961581 kernel: ACPI: Interpreter enabled
Nov 12 20:49:40.961597 kernel: ACPI: PM: (supports S0 S5)
Nov 12 20:49:40.961633 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:49:40.961654 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:49:40.961672 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:49:40.961690 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 12 20:49:40.961706 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:49:40.961951 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:49:40.962215 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 12 20:49:40.962398 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 12 20:49:40.962419 kernel: acpiphp: Slot [3] registered
Nov 12 20:49:40.962442 kernel: acpiphp: Slot [4] registered
Nov 12 20:49:40.962459 kernel: acpiphp: Slot [5] registered
Nov 12 20:49:40.962475 kernel: acpiphp: Slot [6] registered
Nov 12 20:49:40.962492 kernel: acpiphp: Slot [7] registered
Nov 12 20:49:40.962509 kernel: acpiphp: Slot [8] registered
Nov 12 20:49:40.962526 kernel: acpiphp: Slot [9] registered
Nov 12 20:49:40.962543 kernel: acpiphp: Slot [10] registered
Nov 12 20:49:40.962559 kernel: acpiphp: Slot [11] registered
Nov 12 20:49:40.962576 kernel: acpiphp: Slot [12] registered
Nov 12 20:49:40.962596 kernel: acpiphp: Slot [13] registered
Nov 12 20:49:40.962632 kernel: acpiphp: Slot [14] registered
Nov 12 20:49:40.962649 kernel: acpiphp: Slot [15] registered
Nov 12 20:49:40.962666 kernel: acpiphp: Slot [16] registered
Nov 12 20:49:40.962682 kernel: acpiphp: Slot [17] registered
Nov 12 20:49:40.962698 kernel: acpiphp: Slot [18] registered
Nov 12 20:49:40.962715 kernel: acpiphp: Slot [19] registered
Nov 12 20:49:40.962731 kernel: acpiphp: Slot [20] registered
Nov 12 20:49:40.962747 kernel: acpiphp: Slot [21] registered
Nov 12 20:49:40.962768 kernel: acpiphp: Slot [22] registered
Nov 12 20:49:40.962784 kernel: acpiphp: Slot [23] registered
Nov 12 20:49:40.962801 kernel: acpiphp: Slot [24] registered
Nov 12 20:49:40.962817 kernel: acpiphp: Slot [25] registered
Nov 12 20:49:40.962833 kernel: acpiphp: Slot [26] registered
Nov 12 20:49:40.962850 kernel: acpiphp: Slot [27] registered
Nov 12 20:49:40.962866 kernel: acpiphp: Slot [28] registered
Nov 12 20:49:40.962883 kernel: acpiphp: Slot [29] registered
Nov 12 20:49:40.962899 kernel: acpiphp: Slot [30] registered
Nov 12 20:49:40.962916 kernel: acpiphp: Slot [31] registered
Nov 12 20:49:40.963006 kernel: PCI host bridge to bus 0000:00
Nov 12 20:49:40.963322 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:49:40.963458 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:49:40.963584 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:49:40.963854 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 12 20:49:40.963981 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:49:40.964208 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 12 20:49:40.964574 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 12 20:49:40.964748 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Nov 12 20:49:40.964889 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 12 20:49:40.965076 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Nov 12 20:49:40.965218 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Nov 12 20:49:40.966033 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Nov 12 20:49:40.966191 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Nov 12 20:49:40.966344 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Nov 12 20:49:40.966591 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Nov 12 20:49:40.966807 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Nov 12 20:49:40.967035 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Nov 12 20:49:40.967253 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Nov 12 20:49:40.967428 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 12 20:49:40.967861 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:49:40.968037 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 12 20:49:40.968169 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Nov 12 20:49:40.969007 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 12 20:49:40.969185 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Nov 12 20:49:40.969208 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:49:40.969224 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:49:40.969245 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:49:40.969260 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:49:40.969275 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 12 20:49:40.969290 kernel: iommu: Default domain type: Translated
Nov 12 20:49:40.969305 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:49:40.969320 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:49:40.969335 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:49:40.969352 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 20:49:40.969367 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Nov 12 20:49:40.969516 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Nov 12 20:49:40.969672 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Nov 12 20:49:40.969811 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:49:40.969831 kernel: vgaarb: loaded
Nov 12 20:49:40.969845 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 12 20:49:40.969859 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 12 20:49:40.969873 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:49:40.969888 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:49:40.969902 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:49:40.969920 kernel: pnp: PnP ACPI init
Nov 12 20:49:40.969934 kernel: pnp: PnP ACPI: found 5 devices
Nov 12 20:49:40.969951 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:49:40.969966 kernel: NET: Registered PF_INET protocol family
Nov 12 20:49:40.969982 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:49:40.969998 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 12 20:49:40.970013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:49:40.970029 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:49:40.970049 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 12 20:49:40.970066 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 12 20:49:40.970083 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:49:40.970101 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:49:40.970117 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:49:40.970134 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:49:40.970277 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:49:40.970407 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:49:40.970531 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:49:40.971942 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 12 20:49:40.972104 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 12 20:49:40.972129 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:49:40.972146 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:49:40.972163 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 12 20:49:40.972180 kernel: clocksource: Switched to clocksource tsc
Nov 12 20:49:40.972196 kernel: Initialise system trusted keyrings
Nov 12 20:49:40.972212 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 12 20:49:40.972235 kernel: Key type asymmetric registered
Nov 12 20:49:40.972251 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:49:40.972267 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:49:40.972283 kernel: io scheduler mq-deadline registered
Nov 12 20:49:40.972299 kernel: io scheduler kyber registered
Nov 12 20:49:40.972315 kernel: io scheduler bfq registered
Nov 12 20:49:40.972331 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:49:40.972345 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:49:40.972362 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:49:40.972382 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:49:40.972469 kernel: i8042: Warning: Keylock active
Nov 12 20:49:40.972486 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:49:40.972500 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:49:40.972682 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 12 20:49:40.972809 kernel: rtc_cmos 00:00: registered as rtc0
Nov 12 20:49:40.972930 kernel: rtc_cmos 00:00: setting system clock to 2024-11-12T20:49:40 UTC (1731444580)
Nov 12 20:49:40.973166 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 12 20:49:40.973193 kernel: intel_pstate: CPU model not supported
Nov 12 20:49:40.973212 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:49:40.973229 kernel: Segment Routing with IPv6
Nov 12 20:49:40.973246 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:49:40.973263 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:49:40.973280 kernel: Key type dns_resolver registered
Nov 12 20:49:40.973297 kernel: IPI shorthand broadcast: enabled
Nov 12 20:49:40.973313 kernel: sched_clock: Marking stable (711002459, 274644288)->(1129373305, -143726558)
Nov 12 20:49:40.973329 kernel: registered taskstats version 1
Nov 12 20:49:40.973349 kernel: Loading compiled-in X.509 certificates
Nov 12 20:49:40.973365 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:49:40.973381 kernel: Key type .fscrypt registered
Nov 12 20:49:40.973396 kernel: Key type fscrypt-provisioning registered
Nov 12 20:49:40.973412 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:49:40.973428 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:49:40.973444 kernel: ima: No architecture policies found
Nov 12 20:49:40.973459 kernel: clk: Disabling unused clocks
Nov 12 20:49:40.973478 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:49:40.973590 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:49:40.973626 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:49:40.973642 kernel: Run /init as init process
Nov 12 20:49:40.973657 kernel: with arguments:
Nov 12 20:49:40.973672 kernel: /init
Nov 12 20:49:40.973687 kernel: with environment:
Nov 12 20:49:40.973701 kernel: HOME=/
Nov 12 20:49:40.973716 kernel: TERM=linux
Nov 12 20:49:40.973731 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:49:40.973831 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:49:40.973866 systemd[1]: Detected virtualization amazon.
Nov 12 20:49:40.973886 systemd[1]: Detected architecture x86-64.
Nov 12 20:49:40.973902 systemd[1]: Running in initrd.
Nov 12 20:49:40.973918 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:49:40.973937 systemd[1]: Hostname set to .
Nov 12 20:49:40.973954 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:49:40.973971 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:49:40.973988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:49:40.974005 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:49:40.974023 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:49:40.974040 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:49:40.974060 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:49:40.974077 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:49:40.974096 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:49:40.974114 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:49:40.974131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:49:40.974147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:49:40.974164 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:49:40.974184 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:49:40.974202 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:49:40.974219 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:49:40.974235 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:49:40.974253 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:49:40.974270 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:49:40.974287 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:49:40.974304 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:49:40.974321 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:49:40.974340 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:49:40.974358 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:49:40.974374 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:49:40.974392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:49:40.974408 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:49:40.974425 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:49:40.974442 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:49:40.974462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:49:40.974480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:49:40.974528 systemd-journald[178]: Collecting audit messages is disabled.
Nov 12 20:49:40.974568 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:49:40.974587 systemd-journald[178]: Journal started
Nov 12 20:49:40.974631 systemd-journald[178]: Runtime Journal (/run/log/journal/ec26f03f24902bcbabb9ae4241c65413) is 4.8M, max 38.6M, 33.7M free.
Nov 12 20:49:40.978755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:49:40.979240 systemd-modules-load[179]: Inserted module 'overlay'
Nov 12 20:49:40.983429 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:49:40.991181 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:49:41.005706 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:49:41.007941 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:49:41.014809 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:49:41.038635 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:49:41.041428 systemd-modules-load[179]: Inserted module 'br_netfilter'
Nov 12 20:49:41.184114 kernel: Bridge firewalling registered
Nov 12 20:49:41.042813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:49:41.193701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:49:41.206144 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:49:41.226321 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:49:41.228687 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:49:41.229364 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:49:41.250924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:49:41.256013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:49:41.265938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:49:41.275220 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:49:41.290896 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:49:41.311857 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:49:41.336440 dracut-cmdline[214]: dracut-dracut-053
Nov 12 20:49:41.343060 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:49:41.356503 systemd-resolved[206]: Positive Trust Anchors:
Nov 12 20:49:41.356532 systemd-resolved[206]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:49:41.356580 systemd-resolved[206]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:49:41.373736 systemd-resolved[206]: Defaulting to hostname 'linux'.
Nov 12 20:49:41.376301 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:49:41.377635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:49:41.465667 kernel: SCSI subsystem initialized
Nov 12 20:49:41.477669 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:49:41.495683 kernel: iscsi: registered transport (tcp)
Nov 12 20:49:41.522633 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:49:41.522716 kernel: QLogic iSCSI HBA Driver
Nov 12 20:49:41.586150 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:49:41.596832 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:49:41.628079 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:49:41.628160 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:49:41.628183 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:49:41.675684 kernel: raid6: avx512x4 gen() 13959 MB/s
Nov 12 20:49:41.692637 kernel: raid6: avx512x2 gen() 13488 MB/s
Nov 12 20:49:41.709651 kernel: raid6: avx512x1 gen() 15076 MB/s
Nov 12 20:49:41.726663 kernel: raid6: avx2x4 gen() 11840 MB/s
Nov 12 20:49:41.744259 kernel: raid6: avx2x2 gen() 11460 MB/s
Nov 12 20:49:41.760889 kernel: raid6: avx2x1 gen() 10160 MB/s
Nov 12 20:49:41.761085 kernel: raid6: using algorithm avx512x1 gen() 15076 MB/s
Nov 12 20:49:41.779712 kernel: raid6: .... xor() 16574 MB/s, rmw enabled
Nov 12 20:49:41.779812 kernel: raid6: using avx512x2 recovery algorithm
Nov 12 20:49:41.807637 kernel: xor: automatically using best checksumming function avx
Nov 12 20:49:42.059633 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:49:42.078737 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:49:42.084862 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:49:42.143246 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Nov 12 20:49:42.160104 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:49:42.175317 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:49:42.210916 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation
Nov 12 20:49:42.299393 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:49:42.305820 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:49:42.384866 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:49:42.398970 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:49:42.444301 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:49:42.449224 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:49:42.453237 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:49:42.456491 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:49:42.470370 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:49:42.518330 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:49:42.552318 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 12 20:49:42.567938 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 12 20:49:42.568200 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:49:42.568224 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Nov 12 20:49:42.568391 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:a0:e9:5c:ce:2d
Nov 12 20:49:42.571406 (udev-worker)[445]: Network interface NamePolicy= disabled on kernel command line.
Nov 12 20:49:42.588037 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:49:42.588116 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:49:42.592334 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:49:42.594913 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:49:42.599946 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:49:42.604535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:49:42.606450 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:49:42.610452 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:49:42.622118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:49:42.629150 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 12 20:49:42.629423 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 12 20:49:42.647631 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 12 20:49:42.660981 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:49:42.661064 kernel: GPT:9289727 != 16777215
Nov 12 20:49:42.661083 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:49:42.661112 kernel: GPT:9289727 != 16777215
Nov 12 20:49:42.661129 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:49:42.661146 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:49:42.796771 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (458)
Nov 12 20:49:42.830643 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (444)
Nov 12 20:49:42.877943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:49:42.886853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:49:42.919934 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 12 20:49:42.959140 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:49:43.000418 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 12 20:49:43.013557 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 12 20:49:43.023309 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 12 20:49:43.025122 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 12 20:49:43.037002 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:49:43.046296 disk-uuid[627]: Primary Header is updated.
Nov 12 20:49:43.046296 disk-uuid[627]: Secondary Entries is updated.
Nov 12 20:49:43.046296 disk-uuid[627]: Secondary Header is updated.
Nov 12 20:49:43.051696 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:49:43.057629 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:49:43.065632 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:49:44.069640 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 20:49:44.075102 disk-uuid[628]: The operation has completed successfully.
Nov 12 20:49:44.300974 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:49:44.301255 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:49:44.337863 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:49:44.343551 sh[971]: Success
Nov 12 20:49:44.368793 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:49:44.459926 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:49:44.469755 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:49:44.474996 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:49:44.507796 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:49:44.507872 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:49:44.507892 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:49:44.508769 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:49:44.509955 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:49:44.607870 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 12 20:49:44.622964 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:49:44.625139 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:49:44.635841 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:49:44.656866 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:49:44.678282 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:49:44.678349 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:49:44.678369 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 20:49:44.685823 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 20:49:44.698669 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:49:44.698490 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:49:44.717436 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:49:44.727376 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:49:44.819206 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:49:44.830857 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:49:44.912102 systemd-networkd[1163]: lo: Link UP
Nov 12 20:49:44.912116 systemd-networkd[1163]: lo: Gained carrier
Nov 12 20:49:44.913991 systemd-networkd[1163]: Enumeration completed
Nov 12 20:49:44.914117 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:49:44.915466 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:49:44.915471 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:49:44.917129 systemd[1]: Reached target network.target - Network.
Nov 12 20:49:44.926564 systemd-networkd[1163]: eth0: Link UP
Nov 12 20:49:44.926575 systemd-networkd[1163]: eth0: Gained carrier
Nov 12 20:49:44.926667 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:49:44.936699 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.18.222/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 12 20:49:45.187869 ignition[1098]: Ignition 2.19.0
Nov 12 20:49:45.187884 ignition[1098]: Stage: fetch-offline
Nov 12 20:49:45.188452 ignition[1098]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:49:45.188467 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:49:45.189464 ignition[1098]: Ignition finished successfully
Nov 12 20:49:45.195862 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:49:45.205122 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:49:45.228568 ignition[1172]: Ignition 2.19.0
Nov 12 20:49:45.228582 ignition[1172]: Stage: fetch
Nov 12 20:49:45.229492 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:49:45.229505 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:49:45.229638 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:49:45.239876 ignition[1172]: PUT result: OK
Nov 12 20:49:45.243613 ignition[1172]: parsed url from cmdline: ""
Nov 12 20:49:45.243625 ignition[1172]: no config URL provided
Nov 12 20:49:45.243636 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:49:45.243651 ignition[1172]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:49:45.243683 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:49:45.248119 ignition[1172]: PUT result: OK
Nov 12 20:49:45.249171 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 12 20:49:45.249809 ignition[1172]: GET result: OK
Nov 12 20:49:45.251664 ignition[1172]: parsing config with SHA512: 653f8a39102ce6140b972a4906b086497e329cd85e20ea8109c832e9e80195a78c453e51d045c634d8af5baef2f7e82a766b8e33746240f4f257d54ebeb723a6
Nov 12 20:49:45.260855 unknown[1172]: fetched base config from "system"
Nov 12 20:49:45.260872 unknown[1172]: fetched base config from "system"
Nov 12 20:49:45.261427 ignition[1172]: fetch: fetch complete
Nov 12 20:49:45.260881 unknown[1172]: fetched user config from "aws"
Nov 12 20:49:45.261435 ignition[1172]: fetch: fetch passed
Nov 12 20:49:45.264793 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:49:45.261498 ignition[1172]: Ignition finished successfully
Nov 12 20:49:45.283835 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:49:45.308921 ignition[1178]: Ignition 2.19.0
Nov 12 20:49:45.308936 ignition[1178]: Stage: kargs
Nov 12 20:49:45.309391 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:49:45.309409 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:49:45.309515 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:49:45.311944 ignition[1178]: PUT result: OK
Nov 12 20:49:45.323451 ignition[1178]: kargs: kargs passed
Nov 12 20:49:45.323514 ignition[1178]: Ignition finished successfully
Nov 12 20:49:45.327022 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:49:45.330801 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:49:45.363656 ignition[1184]: Ignition 2.19.0
Nov 12 20:49:45.363670 ignition[1184]: Stage: disks
Nov 12 20:49:45.364132 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:49:45.364144 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:49:45.364250 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:49:45.365695 ignition[1184]: PUT result: OK
Nov 12 20:49:45.373172 ignition[1184]: disks: disks passed
Nov 12 20:49:45.373259 ignition[1184]: Ignition finished successfully
Nov 12 20:49:45.374600 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:49:45.377070 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:49:45.378997 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:49:45.381945 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:49:45.382379 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:49:45.382546 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:49:45.389814 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:49:45.428282 systemd-fsck[1192]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:49:45.431735 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:49:45.445821 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:49:45.627636 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:49:45.628195 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:49:45.629857 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:49:45.646736 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:49:45.649760 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:49:45.653251 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:49:45.653318 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:49:45.653351 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:49:45.670963 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:49:45.674650 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1211)
Nov 12 20:49:45.678545 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:49:45.678632 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:49:45.678683 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 20:49:45.681842 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:49:45.686622 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 20:49:45.687707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:49:46.149219 initrd-setup-root[1235]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:49:46.182548 initrd-setup-root[1242]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:49:46.196095 initrd-setup-root[1249]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:49:46.208184 initrd-setup-root[1256]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:49:46.612147 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:49:46.618979 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:49:46.623864 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:49:46.635640 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:49:46.634656 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:49:46.690408 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:49:46.693203 ignition[1329]: INFO : Ignition 2.19.0
Nov 12 20:49:46.693203 ignition[1329]: INFO : Stage: mount
Nov 12 20:49:46.695378 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:49:46.695378 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:49:46.695378 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:49:46.695378 ignition[1329]: INFO : PUT result: OK
Nov 12 20:49:46.704091 ignition[1329]: INFO : mount: mount passed
Nov 12 20:49:46.706180 ignition[1329]: INFO : Ignition finished successfully
Nov 12 20:49:46.707974 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:49:46.715737 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:49:46.721721 systemd-networkd[1163]: eth0: Gained IPv6LL
Nov 12 20:49:46.745904 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:49:46.771632 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1340)
Nov 12 20:49:46.771699 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:49:46.773024 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:49:46.773148 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 20:49:46.779975 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 20:49:46.782753 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:49:46.839934 ignition[1357]: INFO : Ignition 2.19.0
Nov 12 20:49:46.839934 ignition[1357]: INFO : Stage: files
Nov 12 20:49:46.843705 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:49:46.843705 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:49:46.847571 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:49:46.849568 ignition[1357]: INFO : PUT result: OK
Nov 12 20:49:46.854339 ignition[1357]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:49:46.871033 ignition[1357]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:49:46.871033 ignition[1357]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:49:46.879301 ignition[1357]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:49:46.881653 ignition[1357]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:49:46.881653 ignition[1357]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:49:46.881243 unknown[1357]: wrote ssh authorized keys file for user: core
Nov 12 20:49:46.888849 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:49:46.892120 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:49:47.006930 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 20:49:47.143257 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:49:47.143257 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:49:47.147560 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Nov 12 20:49:47.617423 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 20:49:48.137095 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:49:48.137095 ignition[1357]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 20:49:48.152285 ignition[1357]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:49:48.157064 ignition[1357]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:49:48.157064 ignition[1357]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 20:49:48.157064 ignition[1357]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:49:48.167174 ignition[1357]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:49:48.167174 ignition[1357]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:49:48.172392 ignition[1357]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:49:48.172392 ignition[1357]: INFO : files: files passed
Nov 12 20:49:48.172392 ignition[1357]: INFO : Ignition finished successfully
Nov 12 20:49:48.176595 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:49:48.182979 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:49:48.186526 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:49:48.213112 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:49:48.213415 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:49:48.255969 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:49:48.255969 initrd-setup-root-after-ignition[1385]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:49:48.265147 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:49:48.265388 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:49:48.272797 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:49:48.280836 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:49:48.327778 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:49:48.327916 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:49:48.335016 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:49:48.341081 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:49:48.347002 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:49:48.363113 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:49:48.392456 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:49:48.398855 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:49:48.413933 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:49:48.415415 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:49:48.419739 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:49:48.420871 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:49:48.421032 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:49:48.427718 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:49:48.429251 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:49:48.434374 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:49:48.440175 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:49:48.440761 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:49:48.441592 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:49:48.442581 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:49:48.443103 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:49:48.443666 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:49:48.444110 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:49:48.444327 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:49:48.444522 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:49:48.445858 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:49:48.446331 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:49:48.446587 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:49:48.461273 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:49:48.464448 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:49:48.464668 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:49:48.468264 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:49:48.468855 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:49:48.471228 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:49:48.471584 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:49:48.494202 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:49:48.497905 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:49:48.500734 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:49:48.503037 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:49:48.509061 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:49:48.509232 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:49:48.523805 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:49:48.523930 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:49:48.536652 ignition[1409]: INFO : Ignition 2.19.0
Nov 12 20:49:48.538208 ignition[1409]: INFO : Stage: umount
Nov 12 20:49:48.540328 ignition[1409]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:49:48.540328 ignition[1409]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 20:49:48.540328 ignition[1409]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 20:49:48.547123 ignition[1409]: INFO : PUT result: OK
Nov 12 20:49:48.553236 ignition[1409]: INFO : umount: umount passed
Nov 12 20:49:48.560170 ignition[1409]: INFO : Ignition finished successfully
Nov 12 20:49:48.559951 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:49:48.562937 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:49:48.570701 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:49:48.572946 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:49:48.573069 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:49:48.581696 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:49:48.582052 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:49:48.585814 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 20:49:48.585894 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 20:49:48.590532 systemd[1]: Stopped target network.target - Network.
Nov 12 20:49:48.595827 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:49:48.595908 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:49:48.597318 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:49:48.599283 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:49:48.604781 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:49:48.608498 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:49:48.610709 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:49:48.611226 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:49:48.611292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:49:48.611370 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:49:48.611406 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:49:48.611522 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:49:48.611680 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:49:48.611952 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:49:48.611995 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:49:48.612290 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:49:48.612546 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:49:48.622430 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:49:48.622573 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:49:48.622761 systemd-networkd[1163]: eth0: DHCPv6 lease lost
Nov 12 20:49:48.626517 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:49:48.626695 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:49:48.631529 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:49:48.631584 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:49:48.643972 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:49:48.645888 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:49:48.646035 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:49:48.650271 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:49:48.650352 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:49:48.652731 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:49:48.652798 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:49:48.654652 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:49:48.654711 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:49:48.666983 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:49:48.686774 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:49:48.686909 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:49:48.690387 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:49:48.695441 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:49:48.701276 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:49:48.701379 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:49:48.704559 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:49:48.704664 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:49:48.707958 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:49:48.708100 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:49:48.715017 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:49:48.715273 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:49:48.716906 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:49:48.716971 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:49:48.727828 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:49:48.729096 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:49:48.729181 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:49:48.730692 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 20:49:48.730762 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:49:48.735344 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:49:48.735429 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:49:48.737009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:49:48.737088 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:49:48.740818 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:49:48.740960 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:49:48.744921 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:49:48.745198 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:49:48.772348 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:49:48.773585 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:49:48.773682 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:49:48.781920 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:49:48.818535 systemd[1]: Switching root.
Nov 12 20:49:48.888024 systemd-journald[178]: Journal stopped
Nov 12 20:49:51.414730 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:49:51.414809 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:49:51.414840 kernel: SELinux: policy capability open_perms=1
Nov 12 20:49:51.414862 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:49:51.414881 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:49:51.414898 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:49:51.414916 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:49:51.414932 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:49:51.414948 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:49:51.414965 kernel: audit: type=1403 audit(1731444589.932:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:49:51.414984 systemd[1]: Successfully loaded SELinux policy in 72.520ms.
Nov 12 20:49:51.415019 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.179ms.
Nov 12 20:49:51.415039 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:49:51.415155 systemd[1]: Detected virtualization amazon.
Nov 12 20:49:51.415183 systemd[1]: Detected architecture x86-64.
Nov 12 20:49:51.415202 systemd[1]: Detected first boot.
Nov 12 20:49:51.415220 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:49:51.415239 zram_generator::config[1451]: No configuration found.
Nov 12 20:49:51.415264 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:49:51.415287 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 20:49:51.415313 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
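The long `+PAM +AUDIT ... -SYSVINIT` string in the `systemd 255 running` line encodes compile-time features: `+` means built in, `-` means built without. A small parser as a sketch (the function name is illustrative; the sample string is copied from this log, and its `-ACL` entry is consistent with the later "ACLs are not supported, ignoring" messages from systemd-tmpfiles):

```python
def parse_features(feature_str):
    """Split a systemd feature string into (enabled, disabled) name sets."""
    enabled, disabled = set(), set()
    for tok in feature_str.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
        # tokens like "default-hierarchy=unified" are settings, not features
    return enabled, disabled

# Feature string copied verbatim from the "systemd 255 running" entry above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified")
enabled, disabled = parse_features(features)
```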
Nov 12 20:49:51.415332 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:49:51.415352 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:49:51.415370 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:49:51.415393 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:49:51.415412 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:49:51.415430 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:49:51.415448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:49:51.415470 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:49:51.415488 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:49:51.415513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:49:51.415532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:49:51.415549 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:49:51.415567 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:49:51.415586 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:49:51.419510 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:49:51.419559 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:49:51.419588 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:49:51.419637 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 20:49:51.419658 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 20:49:51.419677 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:49:51.419695 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:49:51.419715 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:49:51.419735 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:49:51.419755 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:49:51.419779 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:49:51.419798 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:49:51.419816 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:49:51.419834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:49:51.419853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:49:51.419872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:49:51.419893 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:49:51.419911 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:49:51.419930 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:49:51.419952 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:49:51.419971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:49:51.419989 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:49:51.420007 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:49:51.420025 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:49:51.420046 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:49:51.420064 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:49:51.420083 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:49:51.420107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:49:51.420127 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:49:51.420145 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:49:51.420164 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:49:51.420183 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:49:51.420202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:49:51.420221 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:49:51.420239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:49:51.420259 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:49:51.420280 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 20:49:51.420298 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 20:49:51.420316 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 20:49:51.420334 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 20:49:51.420352 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:49:51.420371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:49:51.420390 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:49:51.420409 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:49:51.420429 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:49:51.420448 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 20:49:51.420467 systemd[1]: Stopped verity-setup.service.
Nov 12 20:49:51.420485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:49:51.420504 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:49:51.420523 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:49:51.420542 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:49:51.420560 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:49:51.420579 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:49:51.420685 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:49:51.420708 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:49:51.421663 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:49:51.421701 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:49:51.421722 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:49:51.421748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:49:51.421769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:49:51.421790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:49:51.421810 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:49:51.421834 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:49:51.421858 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:49:51.421880 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:49:51.421902 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:49:51.422036 systemd-journald[1533]: Collecting audit messages is disabled.
Nov 12 20:49:51.422086 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:49:51.422110 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 20:49:51.422186 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:49:51.422212 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:49:51.422235 systemd-journald[1533]: Journal started
Nov 12 20:49:51.422280 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec26f03f24902bcbabb9ae4241c65413) is 4.8M, max 38.6M, 33.7M free.
Nov 12 20:49:51.425657 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:49:50.923505 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:49:50.960940 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Nov 12 20:49:50.961366 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 20:49:51.432759 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 20:49:51.444298 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 20:49:51.446630 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 20:49:51.457630 kernel: fuse: init (API version 7.39)
Nov 12 20:49:51.461952 kernel: ACPI: bus type drm_connector registered
Nov 12 20:49:51.469622 kernel: loop: module loaded
Nov 12 20:49:51.471624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:49:51.482299 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:49:51.495124 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:49:51.501122 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:49:51.513648 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 20:49:51.518070 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:49:51.521501 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:49:51.523640 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:49:51.523897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:49:51.525391 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:49:51.526173 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:49:51.528084 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:49:51.528814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:49:51.536077 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 20:49:51.564378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:49:51.584361 systemd-tmpfiles[1551]: ACLs are not supported, ignoring.
Nov 12 20:49:51.584391 systemd-tmpfiles[1551]: ACLs are not supported, ignoring.
Nov 12 20:49:51.602779 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:49:51.616337 kernel: loop0: detected capacity change from 0 to 140768
Nov 12 20:49:51.611813 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:49:51.615792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:49:51.619736 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 20:49:51.624057 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 20:49:51.639080 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 20:49:51.643956 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:49:51.645985 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 20:49:51.675847 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 20:49:51.677445 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:49:51.681908 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec26f03f24902bcbabb9ae4241c65413 is 92.783ms for 971 entries.
Nov 12 20:49:51.681908 systemd-journald[1533]: System Journal (/var/log/journal/ec26f03f24902bcbabb9ae4241c65413) is 8.0M, max 195.6M, 187.6M free.
Nov 12 20:49:51.800098 systemd-journald[1533]: Received client request to flush runtime journal.
Nov 12 20:49:51.693821 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:49:51.778492 udevadm[1591]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
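The journald flush entry above reports aggregate figures ("92.783ms for 971 entries"); the average cost per entry is just the ratio, roughly 0.096 ms. A sketch of extracting and computing it (the helper name is illustrative; the sample line is copied, trimmed, from this log):

```python
import re

# Matches the size/count tail of journald's flush-timing message.
FLUSH_RE = re.compile(r"is ([0-9.]+)ms for ([0-9]+) entries")

def mean_flush_ms(line):
    """Average flush cost per journal entry, in milliseconds."""
    m = FLUSH_RE.search(line)
    total_ms, entries = float(m.group(1)), int(m.group(2))
    return total_ms / entries

line = ("systemd-journald[1533]: Time spent on flushing to "
        "/var/log/journal/ec26f03f24902bcbabb9ae4241c65413 "
        "is 92.783ms for 971 entries.")
```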
Nov 12 20:49:51.804699 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 20:49:51.806012 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 20:49:51.811798 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 20:49:51.812677 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 20:49:51.831021 kernel: loop1: detected capacity change from 0 to 210664
Nov 12 20:49:51.831989 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 20:49:51.840923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:49:51.885206 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Nov 12 20:49:51.885640 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Nov 12 20:49:51.892499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:49:51.904639 kernel: loop2: detected capacity change from 0 to 142488
Nov 12 20:49:52.091865 kernel: loop3: detected capacity change from 0 to 61336
Nov 12 20:49:52.266436 kernel: loop4: detected capacity change from 0 to 140768
Nov 12 20:49:52.361665 kernel: loop5: detected capacity change from 0 to 210664
Nov 12 20:49:52.380634 kernel: loop6: detected capacity change from 0 to 142488
Nov 12 20:49:52.421685 kernel: loop7: detected capacity change from 0 to 61336
Nov 12 20:49:52.436698 (sd-merge)[1606]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Nov 12 20:49:52.438462 (sd-merge)[1606]: Merged extensions into '/usr'.
Nov 12 20:49:52.454758 systemd[1]: Reloading requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:49:52.454778 systemd[1]: Reloading...
Nov 12 20:49:52.682706 zram_generator::config[1631]: No configuration found.
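The `(sd-merge)` entries above show systemd-sysext attaching the four extension images (each previously exposed as a loop device) as an overlay on /usr. A sketch of pulling the extension names out of that log line for monitoring or assertions; the function name is illustrative, and the sample line is copied from this boot:

```python
import re

def extensions_from_log(line):
    """Extract system-extension names from an sd-merge 'Using extensions' line."""
    if "Using extensions" not in line:
        return []
    # Extension names appear single-quoted and comma-separated.
    return re.findall(r"'([^']+)'", line)

line = ("(sd-merge)[1606]: Using extensions 'containerd-flatcar', "
        "'docker-flatcar', 'kubernetes', 'oem-ami'.")
```

On a live machine the authoritative view would come from `systemd-sysext status` rather than log scraping; parsing the journal line is only a post-hoc check.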
Nov 12 20:49:53.051561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:49:53.145097 systemd[1]: Reloading finished in 689 ms.
Nov 12 20:49:53.183563 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:49:53.193903 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:49:53.200312 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:49:53.220058 systemd[1]: Reloading requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:49:53.220076 systemd[1]: Reloading...
Nov 12 20:49:53.295968 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:49:53.296641 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:49:53.303574 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:49:53.306241 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Nov 12 20:49:53.308785 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Nov 12 20:49:53.322995 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:49:53.323544 systemd-tmpfiles[1681]: Skipping /boot
Nov 12 20:49:53.377341 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:49:53.379818 systemd-tmpfiles[1681]: Skipping /boot
Nov 12 20:49:53.418066 zram_generator::config[1708]: No configuration found.
Nov 12 20:49:53.592641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:49:53.644260 ldconfig[1554]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:49:53.673538 systemd[1]: Reloading finished in 452 ms.
Nov 12 20:49:53.693761 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:49:53.702365 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:49:53.715284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:49:53.740344 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:49:53.754151 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:49:53.761120 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:49:53.777748 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:49:53.783882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:49:53.788821 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:49:53.808075 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:49:53.813842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:49:53.815163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:49:53.824774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:49:53.838079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:49:53.849067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:49:53.853531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:49:53.854927 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:49:53.864672 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:49:53.864976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:49:53.865619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:49:53.865775 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:49:53.881242 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:49:53.882562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:49:53.903257 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:49:53.906680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:49:53.907043 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:49:53.908835 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:49:53.928857 systemd-udevd[1766]: Using default interface naming scheme 'v255'.
Nov 12 20:49:53.937489 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:49:53.970121 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:49:53.970311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:49:53.972366 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:49:53.973328 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:49:53.985032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:49:53.985249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:49:53.992530 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:49:53.996271 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:49:53.999806 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:49:54.000016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:49:54.004796 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:49:54.016102 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:49:54.021146 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:49:54.022991 augenrules[1793]: No rules
Nov 12 20:49:54.036949 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:49:54.038310 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:49:54.038783 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:49:54.061046 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:49:54.063493 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:49:54.082721 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:49:54.097243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:49:54.225700 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1821)
Nov 12 20:49:54.231641 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1821)
Nov 12 20:49:54.283648 systemd-resolved[1764]: Positive Trust Anchors:
Nov 12 20:49:54.283670 systemd-resolved[1764]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:49:54.283718 systemd-resolved[1764]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:49:54.300924 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 20:49:54.302566 systemd-networkd[1812]: lo: Link UP
Nov 12 20:49:54.302576 systemd-networkd[1812]: lo: Gained carrier
Nov 12 20:49:54.303465 systemd-networkd[1812]: Enumeration completed
Nov 12 20:49:54.303578 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:49:54.305692 (udev-worker)[1808]: Network interface NamePolicy= disabled on kernel command line.
Nov 12 20:49:54.307051 systemd-resolved[1764]: Defaulting to hostname 'linux'.
Nov 12 20:49:54.310161 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:49:54.313406 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:49:54.314838 systemd[1]: Reached target network.target - Network.
Nov 12 20:49:54.317727 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:49:54.345635 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 12 20:49:54.348638 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Nov 12 20:49:54.357636 kernel: ACPI: button: Power Button [PWRF]
Nov 12 20:49:54.360639 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Nov 12 20:49:54.370629 kernel: ACPI: button: Sleep Button [SLPF]
Nov 12 20:49:54.377658 systemd-networkd[1812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:49:54.377672 systemd-networkd[1812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:49:54.382487 systemd-networkd[1812]: eth0: Link UP
Nov 12 20:49:54.382819 systemd-networkd[1812]: eth0: Gained carrier
Nov 12 20:49:54.382850 systemd-networkd[1812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:49:54.387715 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Nov 12 20:49:54.394711 systemd-networkd[1812]: eth0: DHCPv4 address 172.31.18.222/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 12 20:49:54.435953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
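The "potentially unpredictable interface name" warnings arise because eth0 is matched by the catch-all zz-default.network while net.ifnames=0 is on the kernel command line. A sketch of a more specific match, assuming a file under /etc/systemd/network/ and using a placeholder MAC address from the documentation range:

```ini
# /etc/systemd/network/10-eth0.network  (illustrative file; MAC is a placeholder)
[Match]
MACAddress=00:00:5e:00:53:01

[Network]
DHCP=ipv4
```

Matching on a stable property such as the MAC address makes the configuration independent of whatever name the kernel happens to assign the interface.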
Nov 12 20:49:54.445628 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:49:54.455740 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1822)
Nov 12 20:49:54.609256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 12 20:49:54.727653 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:49:54.733875 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:49:54.738876 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:49:54.742658 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:49:54.762551 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:49:54.765074 lvm[1925]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:49:54.803809 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:49:54.807268 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:49:54.808566 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:49:54.810206 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:49:54.811740 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:49:54.813506 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:49:54.815000 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:49:54.816932 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:49:54.818378 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:49:54.818422 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:49:54.819419 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:49:54.821248 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:49:54.824129 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:49:54.833105 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:49:54.841907 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:49:54.847253 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:49:54.851499 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:49:54.855931 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:49:54.857933 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:49:54.857993 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:49:54.871837 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:49:54.880950 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 20:49:54.890680 lvm[1933]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:49:54.893964 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:49:54.905903 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:49:54.912025 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
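The "Listening on docker.socket / sshd.socket" entries are socket activation: systemd binds the listening sockets itself and hands them to the service as inherited file descriptors when the first connection arrives. A minimal Python sketch of the receiving side of the sd_listen_fds(3) convention follows; the helper name is ours, and real services typically use the sd-daemon library or a binding instead:

```python
import os

SD_LISTEN_FDS_START = 3  # first fd systemd passes, after stdin/stdout/stderr


def listen_fds(pid=None):
    """Return the fds passed by systemd socket activation, per sd_listen_fds(3).

    systemd sets LISTEN_PID to the PID the fds are intended for and
    LISTEN_FDS to how many consecutive fds were passed, starting at fd 3.
    """
    if pid is None:
        pid = os.getpid()
    # Guard: the fds are only valid for the exact process systemd spawned.
    if os.environ.get("LISTEN_PID") != str(pid):
        return []
    try:
        count = int(os.environ.get("LISTEN_FDS", "0"))
    except ValueError:
        return []
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + count))
```

The LISTEN_PID check matters: children that inherit the environment but not the role must not try to serve on those descriptors.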
Nov 12 20:49:54.913281 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:49:54.917850 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:49:54.930094 systemd[1]: Started ntpd.service - Network Time Service.
Nov 12 20:49:54.943046 jq[1937]: false
Nov 12 20:49:54.943815 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:49:54.951801 systemd[1]: Starting setup-oem.service - Setup OEM...
Nov 12 20:49:54.956888 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:49:54.973666 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:49:55.009174 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:49:55.011600 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:49:55.012301 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:49:55.018854 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:49:55.022877 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:49:55.027200 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:49:55.038163 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:49:55.038419 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:49:55.056794 jq[1951]: true
Nov 12 20:49:55.113662 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found loop4
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found loop5
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found loop6
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found loop7
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found nvme0n1
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found nvme0n1p1
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found nvme0n1p2
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found nvme0n1p3
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found usr
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found nvme0n1p4
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found nvme0n1p6
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found nvme0n1p7
Nov 12 20:49:55.142753 extend-filesystems[1938]: Found nvme0n1p9
Nov 12 20:49:55.142753 extend-filesystems[1938]: Checking size of /dev/nvme0n1p9
Nov 12 20:49:55.128106 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:49:55.171844 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:48:25 UTC 2024 (1): Starting
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:48:25 UTC 2024 (1): Starting
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: ----------------------------------------------------
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: ntp-4 is maintained by Network Time Foundation,
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: corporation. Support and training for ntp-4 are
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: available at https://www.nwtime.org/support
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: ----------------------------------------------------
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: proto: precision = 0.064 usec (-24)
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: basedate set to 2024-10-31
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: gps base set to 2024-11-03 (week 2339)
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 20:49:55.288532 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 20:49:55.289273 update_engine[1948]: I20241112 20:49:55.244040 1948 main.cc:92] Flatcar Update Engine starting
Nov 12 20:49:55.150346 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 20:49:55.171872 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 20:49:55.171883 ntpd[1940]: ----------------------------------------------------
Nov 12 20:49:55.290184 jq[1956]: true
Nov 12 20:49:55.171894 ntpd[1940]: ntp-4 is maintained by Network Time Foundation,
Nov 12 20:49:55.171904 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 20:49:55.171915 ntpd[1940]: corporation. Support and training for ntp-4 are
Nov 12 20:49:55.171925 ntpd[1940]: available at https://www.nwtime.org/support
Nov 12 20:49:55.171935 ntpd[1940]: ----------------------------------------------------
Nov 12 20:49:55.187384 ntpd[1940]: proto: precision = 0.064 usec (-24)
Nov 12 20:49:55.187763 ntpd[1940]: basedate set to 2024-10-31
Nov 12 20:49:55.187779 ntpd[1940]: gps base set to 2024-11-03 (week 2339)
Nov 12 20:49:55.271492 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 20:49:55.271565 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 20:49:55.291822 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 20:49:55.293915 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 20:49:55.293915 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: Listen normally on 3 eth0 172.31.18.222:123
Nov 12 20:49:55.293915 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: Listen normally on 4 lo [::1]:123
Nov 12 20:49:55.293915 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: bind(21) AF_INET6 fe80::4a0:e9ff:fe5c:ce2d%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 20:49:55.293915 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: unable to create socket on eth0 (5) for fe80::4a0:e9ff:fe5c:ce2d%2#123
Nov 12 20:49:55.293915 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: failed to init interface for address fe80::4a0:e9ff:fe5c:ce2d%2
Nov 12 20:49:55.293915 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: Listening on routing socket on fd #21 for interface updates
Nov 12 20:49:55.291881 ntpd[1940]: Listen normally on 3 eth0 172.31.18.222:123
Nov 12 20:49:55.291936 ntpd[1940]: Listen normally on 4 lo [::1]:123
Nov 12 20:49:55.292002 ntpd[1940]: bind(21) AF_INET6 fe80::4a0:e9ff:fe5c:ce2d%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 20:49:55.292027 ntpd[1940]: unable to create socket on eth0 (5) for fe80::4a0:e9ff:fe5c:ce2d%2#123
Nov 12 20:49:55.292044 ntpd[1940]: failed to init interface for address fe80::4a0:e9ff:fe5c:ce2d%2
Nov 12 20:49:55.292079 ntpd[1940]: Listening on routing socket on fd #21 for interface updates
Nov 12 20:49:55.307878 (ntainerd)[1962]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:49:55.344769 dbus-daemon[1936]: [system] SELinux support is enabled
Nov 12 20:49:55.345251 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:49:55.356213 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:49:55.364689 tar[1953]: linux-amd64/helm
Nov 12 20:49:55.371675 update_engine[1948]: I20241112 20:49:55.356285 1948 update_check_scheduler.cc:74] Next update check in 5m54s
Nov 12 20:49:55.356461 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:49:55.371822 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 20:49:55.371822 ntpd[1940]: 12 Nov 20:49:55 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 20:49:55.369553 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 20:49:55.371989 extend-filesystems[1938]: Resized partition /dev/nvme0n1p9
Nov 12 20:49:55.359315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:49:55.369590 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 20:49:55.359384 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:49:55.361635 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:49:55.361667 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:49:55.370222 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:49:55.381205 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:49:55.396702 extend-filesystems[1987]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:49:55.403730 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Nov 12 20:49:55.390590 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1812 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 12 20:49:55.409427 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 12 20:49:55.411377 systemd[1]: Finished setup-oem.service - Setup OEM.
Nov 12 20:49:55.443563 systemd-logind[1947]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 20:49:55.443593 systemd-logind[1947]: Watching system buttons on /dev/input/event2 (Sleep Button)
Nov 12 20:49:55.443628 systemd-logind[1947]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:49:55.453763 systemd-logind[1947]: New seat seat0.
Nov 12 20:49:55.462406 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:49:55.472663 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Nov 12 20:49:55.520618 extend-filesystems[1987]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Nov 12 20:49:55.520618 extend-filesystems[1987]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 20:49:55.520618 extend-filesystems[1987]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Nov 12 20:49:55.530896 extend-filesystems[1938]: Resized filesystem in /dev/nvme0n1p9
Nov 12 20:49:55.543631 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:49:55.543881 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:49:55.547710 bash[2013]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:49:55.555773 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:49:55.565567 coreos-metadata[1935]: Nov 12 20:49:55.565 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 12 20:49:55.570781 coreos-metadata[1935]: Nov 12 20:49:55.568 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Nov 12 20:49:55.570698 systemd[1]: Starting sshkeys.service...
Nov 12 20:49:55.572112 coreos-metadata[1935]: Nov 12 20:49:55.571 INFO Fetch successful
Nov 12 20:49:55.572112 coreos-metadata[1935]: Nov 12 20:49:55.571 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Nov 12 20:49:55.572839 coreos-metadata[1935]: Nov 12 20:49:55.572 INFO Fetch successful
Nov 12 20:49:55.576790 coreos-metadata[1935]: Nov 12 20:49:55.572 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Nov 12 20:49:55.590331 coreos-metadata[1935]: Nov 12 20:49:55.586 INFO Fetch successful
Nov 12 20:49:55.590331 coreos-metadata[1935]: Nov 12 20:49:55.586 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Nov 12 20:49:55.590331 coreos-metadata[1935]: Nov 12 20:49:55.588 INFO Fetch successful
Nov 12 20:49:55.590331 coreos-metadata[1935]: Nov 12 20:49:55.588 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Nov 12 20:49:55.596450 coreos-metadata[1935]: Nov 12 20:49:55.591 INFO Fetch failed with 404: resource not found
Nov 12 20:49:55.596450 coreos-metadata[1935]: Nov 12 20:49:55.591 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Nov 12 20:49:55.596450 coreos-metadata[1935]: Nov 12 20:49:55.594 INFO Fetch successful
Nov 12 20:49:55.596450 coreos-metadata[1935]: Nov 12 20:49:55.594 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Nov 12 20:49:55.600831 coreos-metadata[1935]: Nov 12 20:49:55.600 INFO Fetch successful
Nov 12 20:49:55.600831 coreos-metadata[1935]: Nov 12 20:49:55.600 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Nov 12 20:49:55.603929 coreos-metadata[1935]: Nov 12 20:49:55.603 INFO Fetch successful
Nov 12 20:49:55.603929 coreos-metadata[1935]: Nov 12 20:49:55.603 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Nov 12 20:49:55.617632 coreos-metadata[1935]: Nov 12 20:49:55.613 INFO Fetch successful
Nov 12 20:49:55.617632 coreos-metadata[1935]: Nov 12 20:49:55.613 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Nov 12 20:49:55.617632 coreos-metadata[1935]: Nov 12 20:49:55.615 INFO Fetch successful
Nov 12 20:49:55.654841 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1815)
Nov 12 20:49:55.711745 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 12 20:49:55.732821 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 12 20:49:55.735108 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 20:49:55.749515 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 20:49:55.812322 systemd-networkd[1812]: eth0: Gained IPv6LL
Nov 12 20:49:55.824970 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:49:55.827630 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:49:55.838234 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Nov 12 20:49:55.847148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
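The coreos-metadata entries above follow the IMDSv2 pattern: one PUT to mint a session token, then GETs against dated meta-data paths carrying that token in a header. A sketch of just the URL and header construction (helper names are ours; no network calls are made here):

```python
# IMDSv2 building blocks, with the base URL and API date taken from the log.
IMDS_BASE = "http://169.254.169.254"
API_DATE = "2021-01-03"
TOKEN_PATH = "/latest/api/token"


def metadata_url(key):
    """Build the metadata URL for a key such as 'instance-id'."""
    return f"{IMDS_BASE}/{API_DATE}/meta-data/{key}"


def token_request_headers(ttl_seconds=21600):
    """Headers for the PUT that mints an IMDSv2 session token."""
    return {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)}


def metadata_request_headers(token):
    """Headers for subsequent GETs, carrying the session token."""
    return {"X-aws-ec2-metadata-token": token}
```

The 404 on the ipv6 key in the log is expected when the instance has no IPv6 address assigned; the agent simply moves on to the next key.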
Nov 12 20:49:55.858833 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 20:49:56.007386 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 20:49:56.131123 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 12 20:49:56.131326 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 12 20:49:56.156936 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1990 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 12 20:49:56.161806 coreos-metadata[2030]: Nov 12 20:49:56.161 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 12 20:49:56.165642 coreos-metadata[2030]: Nov 12 20:49:56.165 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Nov 12 20:49:56.168118 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 12 20:49:56.172823 coreos-metadata[2030]: Nov 12 20:49:56.172 INFO Fetch successful
Nov 12 20:49:56.172823 coreos-metadata[2030]: Nov 12 20:49:56.172 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 12 20:49:56.176687 coreos-metadata[2030]: Nov 12 20:49:56.176 INFO Fetch successful
Nov 12 20:49:56.191733 unknown[2030]: wrote ssh authorized keys file for user: core
Nov 12 20:49:56.226915 locksmithd[1988]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:49:56.265979 polkitd[2130]: Started polkitd version 121
Nov 12 20:49:56.291590 amazon-ssm-agent[2069]: Initializing new seelog logger
Nov 12 20:49:56.292595 amazon-ssm-agent[2069]: New Seelog Logger Creation Complete
Nov 12 20:49:56.293340 amazon-ssm-agent[2069]: 2024/11/12 20:49:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 20:49:56.293340 amazon-ssm-agent[2069]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 20:49:56.295448 amazon-ssm-agent[2069]: 2024/11/12 20:49:56 processing appconfig overrides
Nov 12 20:49:56.305234 update-ssh-keys[2142]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:49:56.306379 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 12 20:49:56.313033 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO Proxy environment variables:
Nov 12 20:49:56.318811 polkitd[2130]: Loading rules from directory /etc/polkit-1/rules.d
Nov 12 20:49:56.318912 polkitd[2130]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 12 20:49:56.321057 systemd[1]: Finished sshkeys.service.
Nov 12 20:49:56.324304 amazon-ssm-agent[2069]: 2024/11/12 20:49:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 20:49:56.324304 amazon-ssm-agent[2069]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 20:49:56.324304 amazon-ssm-agent[2069]: 2024/11/12 20:49:56 processing appconfig overrides
Nov 12 20:49:56.324495 polkitd[2130]: Finished loading, compiling and executing 2 rules
Nov 12 20:49:56.327919 amazon-ssm-agent[2069]: 2024/11/12 20:49:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 20:49:56.327919 amazon-ssm-agent[2069]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 20:49:56.327919 amazon-ssm-agent[2069]: 2024/11/12 20:49:56 processing appconfig overrides
Nov 12 20:49:56.328528 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 12 20:49:56.328811 systemd[1]: Started polkit.service - Authorization Manager.
Nov 12 20:49:56.329771 polkitd[2130]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 12 20:49:56.345746 amazon-ssm-agent[2069]: 2024/11/12 20:49:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
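polkitd reports loading rules from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d and compiling 2 of them. Rules files are small JavaScript snippets of roughly this shape; the rule below is illustrative, not one of the two actually compiled here:

```javascript
// /etc/polkit-1/rules.d/49-example.rules  (illustrative rule)
polkit.addRule(function (action, subject) {
    // Let members of the wheel group manage systemd units without a prompt.
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
    // Returning nothing defers to the next rule / the implicit default.
});
```

Files are evaluated in lexical order across both directories, so a rule's numeric prefix determines its precedence.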
Nov 12 20:49:56.345746 amazon-ssm-agent[2069]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 20:49:56.345746 amazon-ssm-agent[2069]: 2024/11/12 20:49:56 processing appconfig overrides
Nov 12 20:49:56.400962 systemd-hostnamed[1990]: Hostname set to <ip-172-31-18-222> (transient)
Nov 12 20:49:56.401664 systemd-resolved[1764]: System hostname changed to 'ip-172-31-18-222'.
Nov 12 20:49:56.427624 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO http_proxy:
Nov 12 20:49:56.533066 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO no_proxy:
Nov 12 20:49:56.559416 sshd_keygen[1991]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:49:56.602378 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:49:56.609981 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:49:56.619692 systemd[1]: Started sshd@0-172.31.18.222:22-139.178.89.65:34762.service - OpenSSH per-connection server daemon (139.178.89.65:34762).
Nov 12 20:49:56.633302 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 20:49:56.633582 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 20:49:56.635150 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO https_proxy:
Nov 12 20:49:56.646759 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:49:56.651795 containerd[1962]: time="2024-11-12T20:49:56.651701685Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:49:56.713072 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:49:56.727176 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:49:56.741574 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO Checking if agent identity type OnPrem can be assumed
Nov 12 20:49:56.739736 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:49:56.742125 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:49:56.821110 containerd[1962]: time="2024-11-12T20:49:56.820971933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:49:56.823338 containerd[1962]: time="2024-11-12T20:49:56.823287206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:49:56.823493 containerd[1962]: time="2024-11-12T20:49:56.823475952Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:49:56.823566 containerd[1962]: time="2024-11-12T20:49:56.823553333Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:49:56.823857 containerd[1962]: time="2024-11-12T20:49:56.823829371Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:49:56.823971 containerd[1962]: time="2024-11-12T20:49:56.823955389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:49:56.824651 containerd[1962]: time="2024-11-12T20:49:56.824087165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:49:56.824651 containerd[1962]: time="2024-11-12T20:49:56.824111673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:49:56.824651 containerd[1962]: time="2024-11-12T20:49:56.824353964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:49:56.824651 containerd[1962]: time="2024-11-12T20:49:56.824375389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:49:56.824651 containerd[1962]: time="2024-11-12T20:49:56.824397748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:49:56.824651 containerd[1962]: time="2024-11-12T20:49:56.824414441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:49:56.824651 containerd[1962]: time="2024-11-12T20:49:56.824518459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:49:56.825212 containerd[1962]: time="2024-11-12T20:49:56.825188996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:49:56.825494 containerd[1962]: time="2024-11-12T20:49:56.825469668Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:49:56.825572 containerd[1962]: time="2024-11-12T20:49:56.825557814Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:49:56.825935 containerd[1962]: time="2024-11-12T20:49:56.825807893Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:49:56.826059 containerd[1962]: time="2024-11-12T20:49:56.826023472Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:49:56.834631 containerd[1962]: time="2024-11-12T20:49:56.833993463Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:49:56.834631 containerd[1962]: time="2024-11-12T20:49:56.834082884Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:49:56.834631 containerd[1962]: time="2024-11-12T20:49:56.834110386Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:49:56.834631 containerd[1962]: time="2024-11-12T20:49:56.834194000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:49:56.834631 containerd[1962]: time="2024-11-12T20:49:56.834218193Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:49:56.834631 containerd[1962]: time="2024-11-12T20:49:56.834404419Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837210317Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837423071Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837452208Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837473434Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837496002Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837528320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837547354Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837567665Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837588967Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837629985Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837650932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837669786Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837701016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838209 containerd[1962]: time="2024-11-12T20:49:56.837721468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.837739892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.837852825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.837877329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.837898480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.837916307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.837936108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.837965498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.837988431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.838007563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.838037332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.838056362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.838078552Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.838110772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.838128184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.838871 containerd[1962]: time="2024-11-12T20:49:56.838144992Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 20:49:56.840723 containerd[1962]: time="2024-11-12T20:49:56.839534035Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 20:49:56.840723 containerd[1962]: time="2024-11-12T20:49:56.839676901Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 20:49:56.840723 containerd[1962]: time="2024-11-12T20:49:56.839699097Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 20:49:56.840723 containerd[1962]: time="2024-11-12T20:49:56.839719804Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 20:49:56.840723 containerd[1962]: time="2024-11-12T20:49:56.839735156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.840723 containerd[1962]: time="2024-11-12T20:49:56.839757232Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 20:49:56.840723 containerd[1962]: time="2024-11-12T20:49:56.839771396Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 20:49:56.840723 containerd[1962]: time="2024-11-12T20:49:56.839784871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 20:49:56.841100 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO Checking if agent identity type EC2 can be assumed
Nov 12 20:49:56.841149 containerd[1962]: time="2024-11-12T20:49:56.840200476Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 12 20:49:56.841149 containerd[1962]: time="2024-11-12T20:49:56.840296625Z" level=info msg="Connect containerd service"
Nov 12 20:49:56.841149 containerd[1962]: time="2024-11-12T20:49:56.840349109Z" level=info msg="using legacy CRI server"
Nov 12 20:49:56.841149 containerd[1962]: time="2024-11-12T20:49:56.840360299Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 12 20:49:56.841149 containerd[1962]: time="2024-11-12T20:49:56.840498799Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 12 20:49:56.843072 containerd[1962]: time="2024-11-12T20:49:56.842240163Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:49:56.843072 containerd[1962]: time="2024-11-12T20:49:56.842414780Z" level=info msg="Start subscribing containerd event"
Nov 12 20:49:56.843072 containerd[1962]: time="2024-11-12T20:49:56.842476803Z" level=info msg="Start recovering state"
Nov 12 20:49:56.843072 containerd[1962]: time="2024-11-12T20:49:56.842552571Z" level=info msg="Start event monitor"
Nov 12 20:49:56.843072 containerd[1962]: time="2024-11-12T20:49:56.842570928Z" level=info msg="Start snapshots syncer"
Nov 12 20:49:56.843072 containerd[1962]: time="2024-11-12T20:49:56.842582573Z" level=info msg="Start cni network conf syncer for default"
Nov 12 20:49:56.843072 containerd[1962]: time="2024-11-12T20:49:56.842592160Z" level=info msg="Start streaming server"
Nov 12 20:49:56.844684 containerd[1962]: time="2024-11-12T20:49:56.843553674Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 12 20:49:56.844684 containerd[1962]: time="2024-11-12T20:49:56.843667051Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 12 20:49:56.843858 systemd[1]: Started containerd.service - containerd container runtime.
Nov 12 20:49:56.845183 containerd[1962]: time="2024-11-12T20:49:56.844886396Z" level=info msg="containerd successfully booted in 0.194842s"
Nov 12 20:49:56.939137 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO Agent will take identity from EC2
Nov 12 20:49:56.946828 sshd[2167]: Accepted publickey for core from 139.178.89.65 port 34762 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:49:56.951930 sshd[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:49:56.977667 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 12 20:49:56.993744 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 12 20:49:57.005926 systemd-logind[1947]: New session 1 of user core.
Nov 12 20:49:57.031361 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 12 20:49:57.038724 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO [amazon-ssm-agent] using named pipe channel for IPC
Nov 12 20:49:57.045160 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 12 20:49:57.058034 (systemd)[2181]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 12 20:49:57.138534 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO [amazon-ssm-agent] using named pipe channel for IPC
Nov 12 20:49:57.239285 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO [amazon-ssm-agent] using named pipe channel for IPC
Nov 12 20:49:57.328694 systemd[2181]: Queued start job for default target default.target.
Nov 12 20:49:57.335175 systemd[2181]: Created slice app.slice - User Application Slice.
Nov 12 20:49:57.335215 systemd[2181]: Reached target paths.target - Paths.
Nov 12 20:49:57.335237 systemd[2181]: Reached target timers.target - Timers.
Nov 12 20:49:57.339778 systemd[2181]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 12 20:49:57.343051 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Nov 12 20:49:57.373722 systemd[2181]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 12 20:49:57.373912 systemd[2181]: Reached target sockets.target - Sockets.
Nov 12 20:49:57.373942 systemd[2181]: Reached target basic.target - Basic System.
Nov 12 20:49:57.374003 systemd[2181]: Reached target default.target - Main User Target.
Nov 12 20:49:57.374042 systemd[2181]: Startup finished in 298ms.
Nov 12 20:49:57.374148 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 12 20:49:57.383262 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 12 20:49:57.443751 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Nov 12 20:49:57.448594 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO [amazon-ssm-agent] Starting Core Agent
Nov 12 20:49:57.448594 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Nov 12 20:49:57.448594 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO [Registrar] Starting registrar module
Nov 12 20:49:57.448594 amazon-ssm-agent[2069]: 2024-11-12 20:49:56 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Nov 12 20:49:57.448594 amazon-ssm-agent[2069]: 2024-11-12 20:49:57 INFO [EC2Identity] EC2 registration was successful.
Nov 12 20:49:57.448594 amazon-ssm-agent[2069]: 2024-11-12 20:49:57 INFO [CredentialRefresher] credentialRefresher has started
Nov 12 20:49:57.448594 amazon-ssm-agent[2069]: 2024-11-12 20:49:57 INFO [CredentialRefresher] Starting credentials refresher loop
Nov 12 20:49:57.448594 amazon-ssm-agent[2069]: 2024-11-12 20:49:57 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Nov 12 20:49:57.473932 tar[1953]: linux-amd64/LICENSE
Nov 12 20:49:57.474405 tar[1953]: linux-amd64/README.md
Nov 12 20:49:57.503509 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 20:49:57.542550 amazon-ssm-agent[2069]: 2024-11-12 20:49:57 INFO [CredentialRefresher] Next credential rotation will be in 30.991630991666668 minutes
Nov 12 20:49:57.547985 systemd[1]: Started sshd@1-172.31.18.222:22-139.178.89.65:34902.service - OpenSSH per-connection server daemon (139.178.89.65:34902).
Nov 12 20:49:57.742835 sshd[2195]: Accepted publickey for core from 139.178.89.65 port 34902 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:49:57.751342 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:49:57.765652 systemd-logind[1947]: New session 2 of user core.
Nov 12 20:49:57.771884 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 12 20:49:57.899294 sshd[2195]: pam_unix(sshd:session): session closed for user core
Nov 12 20:49:57.903232 systemd[1]: sshd@1-172.31.18.222:22-139.178.89.65:34902.service: Deactivated successfully.
Nov 12 20:49:57.906178 systemd[1]: session-2.scope: Deactivated successfully.
Nov 12 20:49:57.908844 systemd-logind[1947]: Session 2 logged out. Waiting for processes to exit.
Nov 12 20:49:57.910183 systemd-logind[1947]: Removed session 2.
Nov 12 20:49:57.934456 systemd[1]: Started sshd@2-172.31.18.222:22-139.178.89.65:34904.service - OpenSSH per-connection server daemon (139.178.89.65:34904).
Nov 12 20:49:58.007486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:49:58.012253 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 12 20:49:58.013211 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:49:58.015747 systemd[1]: Startup finished in 846ms (kernel) + 9.208s (initrd) + 8.151s (userspace) = 18.206s.
Nov 12 20:49:58.126641 sshd[2202]: Accepted publickey for core from 139.178.89.65 port 34904 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:49:58.128584 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:49:58.142681 systemd-logind[1947]: New session 3 of user core.
Nov 12 20:49:58.148223 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 12 20:49:58.172846 ntpd[1940]: Listen normally on 6 eth0 [fe80::4a0:e9ff:fe5c:ce2d%2]:123
Nov 12 20:49:58.173991 ntpd[1940]: 12 Nov 20:49:58 ntpd[1940]: Listen normally on 6 eth0 [fe80::4a0:e9ff:fe5c:ce2d%2]:123
Nov 12 20:49:58.283559 sshd[2202]: pam_unix(sshd:session): session closed for user core
Nov 12 20:49:58.288053 systemd[1]: sshd@2-172.31.18.222:22-139.178.89.65:34904.service: Deactivated successfully.
Nov 12 20:49:58.292199 systemd[1]: session-3.scope: Deactivated successfully.
Nov 12 20:49:58.300337 systemd-logind[1947]: Session 3 logged out. Waiting for processes to exit.
Nov 12 20:49:58.302289 systemd-logind[1947]: Removed session 3.
Nov 12 20:49:58.464950 amazon-ssm-agent[2069]: 2024-11-12 20:49:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Nov 12 20:49:58.565629 amazon-ssm-agent[2069]: 2024-11-12 20:49:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2223) started
Nov 12 20:49:58.667297 amazon-ssm-agent[2069]: 2024-11-12 20:49:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Nov 12 20:49:58.860681 kubelet[2208]: E1112 20:49:58.860539 2208 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:49:58.864379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:49:58.864571 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:49:58.864923 systemd[1]: kubelet.service: Consumed 1.088s CPU time.
Nov 12 20:50:02.479196 systemd-resolved[1764]: Clock change detected. Flushing caches.
Nov 12 20:50:08.631474 systemd[1]: Started sshd@3-172.31.18.222:22-139.178.89.65:55822.service - OpenSSH per-connection server daemon (139.178.89.65:55822).
Nov 12 20:50:08.810180 sshd[2238]: Accepted publickey for core from 139.178.89.65 port 55822 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:50:08.811667 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:08.816430 systemd-logind[1947]: New session 4 of user core.
Nov 12 20:50:08.827365 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 12 20:50:08.947769 sshd[2238]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:08.951629 systemd[1]: sshd@3-172.31.18.222:22-139.178.89.65:55822.service: Deactivated successfully.
Nov 12 20:50:08.953827 systemd[1]: session-4.scope: Deactivated successfully.
Nov 12 20:50:08.955627 systemd-logind[1947]: Session 4 logged out. Waiting for processes to exit.
Nov 12 20:50:08.956781 systemd-logind[1947]: Removed session 4.
Nov 12 20:50:08.989297 systemd[1]: Started sshd@4-172.31.18.222:22-139.178.89.65:55826.service - OpenSSH per-connection server daemon (139.178.89.65:55826).
Nov 12 20:50:09.168746 sshd[2245]: Accepted publickey for core from 139.178.89.65 port 55826 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:50:09.170620 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:09.171600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:50:09.180714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:50:09.186951 systemd-logind[1947]: New session 5 of user core.
Nov 12 20:50:09.197715 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 12 20:50:09.321317 sshd[2245]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:09.329053 systemd[1]: sshd@4-172.31.18.222:22-139.178.89.65:55826.service: Deactivated successfully.
Nov 12 20:50:09.331638 systemd[1]: session-5.scope: Deactivated successfully.
Nov 12 20:50:09.333877 systemd-logind[1947]: Session 5 logged out. Waiting for processes to exit.
Nov 12 20:50:09.335001 systemd-logind[1947]: Removed session 5.
Nov 12 20:50:09.358488 systemd[1]: Started sshd@5-172.31.18.222:22-139.178.89.65:55834.service - OpenSSH per-connection server daemon (139.178.89.65:55834).
Nov 12 20:50:09.529637 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 55834 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:50:09.532032 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:09.557277 systemd-logind[1947]: New session 6 of user core.
Nov 12 20:50:09.567419 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 12 20:50:09.654167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:50:09.666733 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:50:09.699341 sshd[2255]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:09.707455 systemd[1]: sshd@5-172.31.18.222:22-139.178.89.65:55834.service: Deactivated successfully.
Nov 12 20:50:09.710850 systemd[1]: session-6.scope: Deactivated successfully.
Nov 12 20:50:09.711989 systemd-logind[1947]: Session 6 logged out. Waiting for processes to exit.
Nov 12 20:50:09.732633 systemd[1]: Started sshd@6-172.31.18.222:22-139.178.89.65:55846.service - OpenSSH per-connection server daemon (139.178.89.65:55846).
Nov 12 20:50:09.733468 systemd-logind[1947]: Removed session 6.
Nov 12 20:50:09.761466 kubelet[2263]: E1112 20:50:09.761394 2263 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:50:09.766596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:50:09.766787 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:50:09.910904 sshd[2274]: Accepted publickey for core from 139.178.89.65 port 55846 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:50:09.912339 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:09.929196 systemd-logind[1947]: New session 7 of user core.
Nov 12 20:50:09.937340 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 12 20:50:10.116904 sudo[2278]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 12 20:50:10.117695 sudo[2278]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:50:10.139436 sudo[2278]: pam_unix(sudo:session): session closed for user root
Nov 12 20:50:10.164508 sshd[2274]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:10.171954 systemd[1]: sshd@6-172.31.18.222:22-139.178.89.65:55846.service: Deactivated successfully.
Nov 12 20:50:10.174683 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 20:50:10.178813 systemd-logind[1947]: Session 7 logged out. Waiting for processes to exit.
Nov 12 20:50:10.180480 systemd-logind[1947]: Removed session 7.
Nov 12 20:50:10.209951 systemd[1]: Started sshd@7-172.31.18.222:22-139.178.89.65:55862.service - OpenSSH per-connection server daemon (139.178.89.65:55862).
Nov 12 20:50:10.376166 sshd[2283]: Accepted publickey for core from 139.178.89.65 port 55862 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:50:10.377914 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:10.384904 systemd-logind[1947]: New session 8 of user core.
Nov 12 20:50:10.394212 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 12 20:50:10.508186 sudo[2287]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 12 20:50:10.508719 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:50:10.520919 sudo[2287]: pam_unix(sudo:session): session closed for user root
Nov 12 20:50:10.532461 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 12 20:50:10.532951 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:50:10.561464 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 12 20:50:10.572475 auditctl[2290]: No rules
Nov 12 20:50:10.573098 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 20:50:10.573521 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 12 20:50:10.582142 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:50:10.649901 augenrules[2308]: No rules
Nov 12 20:50:10.651911 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:50:10.653711 sudo[2286]: pam_unix(sudo:session): session closed for user root
Nov 12 20:50:10.676789 sshd[2283]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:10.680475 systemd[1]: sshd@7-172.31.18.222:22-139.178.89.65:55862.service: Deactivated successfully.
Nov 12 20:50:10.682963 systemd[1]: session-8.scope: Deactivated successfully.
Nov 12 20:50:10.684619 systemd-logind[1947]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:50:10.687315 systemd-logind[1947]: Removed session 8. Nov 12 20:50:10.712930 systemd[1]: Started sshd@8-172.31.18.222:22-139.178.89.65:55866.service - OpenSSH per-connection server daemon (139.178.89.65:55866). Nov 12 20:50:10.899812 sshd[2316]: Accepted publickey for core from 139.178.89.65 port 55866 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:50:10.901550 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:50:10.912025 systemd-logind[1947]: New session 9 of user core. Nov 12 20:50:10.923619 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:50:11.046487 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:50:11.046897 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:50:11.889294 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:50:11.900834 (dockerd)[2335]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:50:12.497020 dockerd[2335]: time="2024-11-12T20:50:12.496954688Z" level=info msg="Starting up" Nov 12 20:50:12.690697 dockerd[2335]: time="2024-11-12T20:50:12.690640522Z" level=info msg="Loading containers: start." Nov 12 20:50:12.878111 kernel: Initializing XFRM netlink socket Nov 12 20:50:12.973337 (udev-worker)[2356]: Network interface NamePolicy= disabled on kernel command line. Nov 12 20:50:13.068729 systemd-networkd[1812]: docker0: Link UP Nov 12 20:50:13.089292 dockerd[2335]: time="2024-11-12T20:50:13.089241711Z" level=info msg="Loading containers: done." 
Nov 12 20:50:13.116847 dockerd[2335]: time="2024-11-12T20:50:13.116799927Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:50:13.117150 dockerd[2335]: time="2024-11-12T20:50:13.116934561Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:50:13.117150 dockerd[2335]: time="2024-11-12T20:50:13.117095331Z" level=info msg="Daemon has completed initialization" Nov 12 20:50:13.208657 dockerd[2335]: time="2024-11-12T20:50:13.203583075Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:50:13.203902 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:50:14.487972 containerd[1962]: time="2024-11-12T20:50:14.487426917Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\"" Nov 12 20:50:15.155828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832850687.mount: Deactivated successfully. 
Nov 12 20:50:18.582684 containerd[1962]: time="2024-11-12T20:50:18.582628486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:18.584893 containerd[1962]: time="2024-11-12T20:50:18.584782959Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=32676443" Nov 12 20:50:18.587104 containerd[1962]: time="2024-11-12T20:50:18.587019065Z" level=info msg="ImageCreate event name:\"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:18.593089 containerd[1962]: time="2024-11-12T20:50:18.590938505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:18.594663 containerd[1962]: time="2024-11-12T20:50:18.594603405Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"32673243\" in 4.107123852s" Nov 12 20:50:18.594782 containerd[1962]: time="2024-11-12T20:50:18.594671337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\"" Nov 12 20:50:18.629938 containerd[1962]: time="2024-11-12T20:50:18.629888524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\"" Nov 12 20:50:19.826451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 12 20:50:19.834382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:50:20.292194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:50:20.305585 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:50:20.367626 kubelet[2546]: E1112 20:50:20.367565 2546 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:50:20.371249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:50:20.371469 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:50:22.628527 containerd[1962]: time="2024-11-12T20:50:22.628474544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:22.629872 containerd[1962]: time="2024-11-12T20:50:22.629724961Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=29605796" Nov 12 20:50:22.632547 containerd[1962]: time="2024-11-12T20:50:22.631227416Z" level=info msg="ImageCreate event name:\"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:22.634326 containerd[1962]: time="2024-11-12T20:50:22.634285272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:22.635398 containerd[1962]: time="2024-11-12T20:50:22.635360283Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"31051162\" in 4.005430611s" Nov 12 20:50:22.635540 containerd[1962]: time="2024-11-12T20:50:22.635518415Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\"" Nov 12 20:50:22.700987 containerd[1962]: time="2024-11-12T20:50:22.700948620Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\"" Nov 12 20:50:25.228090 containerd[1962]: time="2024-11-12T20:50:25.227321568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:25.238900 containerd[1962]: time="2024-11-12T20:50:25.238843259Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=17784244" Nov 12 20:50:25.257240 containerd[1962]: time="2024-11-12T20:50:25.257144197Z" level=info msg="ImageCreate event name:\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:25.276870 containerd[1962]: time="2024-11-12T20:50:25.276782950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:25.279362 containerd[1962]: time="2024-11-12T20:50:25.277819021Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id 
\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"19229628\" in 2.576827572s" Nov 12 20:50:25.279362 containerd[1962]: time="2024-11-12T20:50:25.277868007Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\"" Nov 12 20:50:25.319122 containerd[1962]: time="2024-11-12T20:50:25.319084899Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\"" Nov 12 20:50:26.516704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203664655.mount: Deactivated successfully. Nov 12 20:50:26.744268 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 12 20:50:27.266259 containerd[1962]: time="2024-11-12T20:50:27.266206678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:27.269728 containerd[1962]: time="2024-11-12T20:50:27.269676760Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=29054624" Nov 12 20:50:27.271801 containerd[1962]: time="2024-11-12T20:50:27.271664123Z" level=info msg="ImageCreate event name:\"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:27.275520 containerd[1962]: time="2024-11-12T20:50:27.275468643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:27.276976 containerd[1962]: time="2024-11-12T20:50:27.276805717Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id 
\"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\", repo tag \"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"29053643\" in 1.957676999s" Nov 12 20:50:27.276976 containerd[1962]: time="2024-11-12T20:50:27.276856499Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\"" Nov 12 20:50:27.311559 containerd[1962]: time="2024-11-12T20:50:27.311517380Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:50:27.965408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601596266.mount: Deactivated successfully. Nov 12 20:50:29.363722 containerd[1962]: time="2024-11-12T20:50:29.363662437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:29.365802 containerd[1962]: time="2024-11-12T20:50:29.365757445Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:50:29.367960 containerd[1962]: time="2024-11-12T20:50:29.367470747Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:29.371873 containerd[1962]: time="2024-11-12T20:50:29.371826373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:29.373253 containerd[1962]: time="2024-11-12T20:50:29.373206878Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.061642549s" Nov 12 20:50:29.373380 containerd[1962]: time="2024-11-12T20:50:29.373258319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:50:29.403235 containerd[1962]: time="2024-11-12T20:50:29.403195813Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:50:29.951212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281946602.mount: Deactivated successfully. Nov 12 20:50:29.959523 containerd[1962]: time="2024-11-12T20:50:29.959466863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:29.960857 containerd[1962]: time="2024-11-12T20:50:29.960690932Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 20:50:29.963718 containerd[1962]: time="2024-11-12T20:50:29.962171218Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:29.966653 containerd[1962]: time="2024-11-12T20:50:29.965559577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:29.966653 containerd[1962]: time="2024-11-12T20:50:29.966487416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 563.248794ms" Nov 12 20:50:29.966653 containerd[1962]: time="2024-11-12T20:50:29.966528826Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:50:30.022949 containerd[1962]: time="2024-11-12T20:50:30.022889120Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Nov 12 20:50:30.542108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 20:50:30.552452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:50:30.576045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154157445.mount: Deactivated successfully. Nov 12 20:50:30.987720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:50:31.023586 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:50:31.180670 kubelet[2661]: E1112 20:50:31.177852 2661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:50:31.189825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:50:31.190030 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 20:50:34.297784 containerd[1962]: time="2024-11-12T20:50:34.297719686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:34.301607 containerd[1962]: time="2024-11-12T20:50:34.301007860Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Nov 12 20:50:34.303279 containerd[1962]: time="2024-11-12T20:50:34.303232927Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:34.307784 containerd[1962]: time="2024-11-12T20:50:34.307714284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:50:34.309596 containerd[1962]: time="2024-11-12T20:50:34.309271556Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.286090622s" Nov 12 20:50:34.309596 containerd[1962]: time="2024-11-12T20:50:34.309329513Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Nov 12 20:50:38.186925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:50:38.198707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:50:38.228990 systemd[1]: Reloading requested from client PID 2773 ('systemctl') (unit session-9.scope)... Nov 12 20:50:38.229018 systemd[1]: Reloading... 
Nov 12 20:50:38.441102 zram_generator::config[2813]: No configuration found. Nov 12 20:50:38.615354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:50:38.815720 systemd[1]: Reloading finished in 585 ms. Nov 12 20:50:38.876485 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:50:38.876891 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:50:38.877311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:50:38.885514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:50:39.265409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:50:39.278922 (kubelet)[2874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:50:39.356007 kubelet[2874]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:50:39.356839 kubelet[2874]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:50:39.356839 kubelet[2874]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:50:39.361547 kubelet[2874]: I1112 20:50:39.361331 2874 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:50:39.878787 kubelet[2874]: I1112 20:50:39.878711 2874 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 20:50:39.878787 kubelet[2874]: I1112 20:50:39.878775 2874 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:50:39.879128 kubelet[2874]: I1112 20:50:39.879058 2874 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 20:50:39.926892 kubelet[2874]: I1112 20:50:39.926841 2874 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:50:39.933090 kubelet[2874]: E1112 20:50:39.933046 2874 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:39.950299 kubelet[2874]: I1112 20:50:39.950254 2874 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:50:39.954326 kubelet[2874]: I1112 20:50:39.953859 2874 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:50:39.962203 kubelet[2874]: I1112 20:50:39.954322 2874 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-222","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:50:39.963059 kubelet[2874]: I1112 20:50:39.963020 2874 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 
20:50:39.963059 kubelet[2874]: I1112 20:50:39.963061 2874 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:50:39.963272 kubelet[2874]: I1112 20:50:39.963248 2874 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:50:39.965891 kubelet[2874]: W1112 20:50:39.965666 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-222&limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:39.965891 kubelet[2874]: E1112 20:50:39.965853 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-222&limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:39.969544 kubelet[2874]: I1112 20:50:39.969472 2874 kubelet.go:400] "Attempting to sync node with API server" Nov 12 20:50:39.969544 kubelet[2874]: I1112 20:50:39.969539 2874 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:50:39.969722 kubelet[2874]: I1112 20:50:39.969593 2874 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:50:39.969722 kubelet[2874]: I1112 20:50:39.969617 2874 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:50:39.978015 kubelet[2874]: W1112 20:50:39.977814 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:39.978015 kubelet[2874]: E1112 20:50:39.977890 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.18.222:6443: connect: connection refused Nov 12 20:50:39.979351 kubelet[2874]: I1112 20:50:39.979320 2874 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:50:39.984634 kubelet[2874]: I1112 20:50:39.984587 2874 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:50:39.984774 kubelet[2874]: W1112 20:50:39.984684 2874 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:50:39.986274 kubelet[2874]: I1112 20:50:39.985997 2874 server.go:1264] "Started kubelet" Nov 12 20:50:40.006272 kubelet[2874]: E1112 20:50:40.006148 2874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.222:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.222:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-222.180753b9fa9e1773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-222,UID:ip-172-31-18-222,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-222,},FirstTimestamp:2024-11-12 20:50:39.985964915 +0000 UTC m=+0.698641022,LastTimestamp:2024-11-12 20:50:39.985964915 +0000 UTC m=+0.698641022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-222,}" Nov 12 20:50:40.011878 kubelet[2874]: I1112 20:50:40.010381 2874 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:50:40.015991 kubelet[2874]: I1112 20:50:40.015237 2874 server.go:455] "Adding debug handlers to kubelet server" Nov 12 20:50:40.020636 kubelet[2874]: I1112 20:50:40.017288 2874 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Nov 12 20:50:40.020636 kubelet[2874]: I1112 20:50:40.019148 2874 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:50:40.029152 kubelet[2874]: I1112 20:50:40.025481 2874 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:50:40.032522 kubelet[2874]: I1112 20:50:40.032489 2874 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:50:40.040488 kubelet[2874]: I1112 20:50:40.039928 2874 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 20:50:40.040488 kubelet[2874]: I1112 20:50:40.040057 2874 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:50:40.040681 kubelet[2874]: E1112 20:50:40.040549 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-222?timeout=10s\": dial tcp 172.31.18.222:6443: connect: connection refused" interval="200ms" Nov 12 20:50:40.051816 kubelet[2874]: I1112 20:50:40.048725 2874 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:50:40.051816 kubelet[2874]: I1112 20:50:40.049632 2874 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:50:40.051816 kubelet[2874]: W1112 20:50:40.051026 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:40.051816 kubelet[2874]: E1112 20:50:40.051167 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.31.18.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:40.055785 kubelet[2874]: I1112 20:50:40.055755 2874 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:50:40.056059 kubelet[2874]: E1112 20:50:40.056032 2874 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:50:40.108663 kubelet[2874]: I1112 20:50:40.108639 2874 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:50:40.108837 kubelet[2874]: I1112 20:50:40.108825 2874 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:50:40.108961 kubelet[2874]: I1112 20:50:40.108951 2874 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:50:40.119197 kubelet[2874]: I1112 20:50:40.119164 2874 policy_none.go:49] "None policy: Start" Nov 12 20:50:40.122946 kubelet[2874]: I1112 20:50:40.122896 2874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:50:40.124745 kubelet[2874]: I1112 20:50:40.124719 2874 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:50:40.124745 kubelet[2874]: I1112 20:50:40.124752 2874 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:50:40.134159 kubelet[2874]: I1112 20:50:40.133772 2874 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:50:40.134159 kubelet[2874]: I1112 20:50:40.133801 2874 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:50:40.135253 kubelet[2874]: I1112 20:50:40.134452 2874 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 20:50:40.135253 kubelet[2874]: E1112 20:50:40.134518 2874 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:50:40.151657 kubelet[2874]: W1112 20:50:40.151322 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:40.151657 kubelet[2874]: E1112 20:50:40.151615 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:40.160531 kubelet[2874]: I1112 20:50:40.160485 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-222" Nov 12 20:50:40.160956 kubelet[2874]: E1112 20:50:40.160847 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.222:6443/api/v1/nodes\": dial tcp 172.31.18.222:6443: connect: connection refused" node="ip-172-31-18-222" Nov 12 20:50:40.169075 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:50:40.208251 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:50:40.224778 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 20:50:40.235379 kubelet[2874]: E1112 20:50:40.235304 2874 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:50:40.238748 kubelet[2874]: I1112 20:50:40.238003 2874 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:50:40.238748 kubelet[2874]: I1112 20:50:40.238208 2874 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:50:40.238748 kubelet[2874]: I1112 20:50:40.238313 2874 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:50:40.244813 kubelet[2874]: E1112 20:50:40.244772 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-222?timeout=10s\": dial tcp 172.31.18.222:6443: connect: connection refused" interval="400ms" Nov 12 20:50:40.246142 kubelet[2874]: E1112 20:50:40.246118 2874 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-222\" not found" Nov 12 20:50:40.363235 kubelet[2874]: I1112 20:50:40.363198 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-222" Nov 12 20:50:40.363791 kubelet[2874]: E1112 20:50:40.363662 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.222:6443/api/v1/nodes\": dial tcp 172.31.18.222:6443: connect: connection refused" node="ip-172-31-18-222" Nov 12 20:50:40.436417 kubelet[2874]: I1112 20:50:40.436260 2874 topology_manager.go:215] "Topology Admit Handler" podUID="7c92378cca9e25a5505a051499930e38" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-222" Nov 12 20:50:40.438498 kubelet[2874]: I1112 20:50:40.438463 2874 topology_manager.go:215] "Topology Admit Handler" podUID="b50458b7192344c8d21ed8dc69ac9f6b" 
podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:40.440758 kubelet[2874]: I1112 20:50:40.440517 2874 topology_manager.go:215] "Topology Admit Handler" podUID="d3f1e6cef35001581f234d430af14ff8" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-222" Nov 12 20:50:40.448102 kubelet[2874]: I1112 20:50:40.447952 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:40.448102 kubelet[2874]: I1112 20:50:40.447994 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:40.448102 kubelet[2874]: I1112 20:50:40.448022 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:40.448361 kubelet[2874]: I1112 20:50:40.448331 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" 
Nov 12 20:50:40.448604 kubelet[2874]: I1112 20:50:40.448375 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c92378cca9e25a5505a051499930e38-ca-certs\") pod \"kube-apiserver-ip-172-31-18-222\" (UID: \"7c92378cca9e25a5505a051499930e38\") " pod="kube-system/kube-apiserver-ip-172-31-18-222" Nov 12 20:50:40.448604 kubelet[2874]: I1112 20:50:40.448533 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c92378cca9e25a5505a051499930e38-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-222\" (UID: \"7c92378cca9e25a5505a051499930e38\") " pod="kube-system/kube-apiserver-ip-172-31-18-222" Nov 12 20:50:40.448604 kubelet[2874]: I1112 20:50:40.448576 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c92378cca9e25a5505a051499930e38-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-222\" (UID: \"7c92378cca9e25a5505a051499930e38\") " pod="kube-system/kube-apiserver-ip-172-31-18-222" Nov 12 20:50:40.448604 kubelet[2874]: I1112 20:50:40.448603 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:40.450207 systemd[1]: Created slice kubepods-burstable-pod7c92378cca9e25a5505a051499930e38.slice - libcontainer container kubepods-burstable-pod7c92378cca9e25a5505a051499930e38.slice. 
Nov 12 20:50:40.464740 systemd[1]: Created slice kubepods-burstable-podb50458b7192344c8d21ed8dc69ac9f6b.slice - libcontainer container kubepods-burstable-podb50458b7192344c8d21ed8dc69ac9f6b.slice. Nov 12 20:50:40.471939 systemd[1]: Created slice kubepods-burstable-podd3f1e6cef35001581f234d430af14ff8.slice - libcontainer container kubepods-burstable-podd3f1e6cef35001581f234d430af14ff8.slice. Nov 12 20:50:40.537666 kubelet[2874]: E1112 20:50:40.537342 2874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.222:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.222:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-222.180753b9fa9e1773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-222,UID:ip-172-31-18-222,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-222,},FirstTimestamp:2024-11-12 20:50:39.985964915 +0000 UTC m=+0.698641022,LastTimestamp:2024-11-12 20:50:39.985964915 +0000 UTC m=+0.698641022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-222,}" Nov 12 20:50:40.549434 kubelet[2874]: I1112 20:50:40.549175 2874 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3f1e6cef35001581f234d430af14ff8-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-222\" (UID: \"d3f1e6cef35001581f234d430af14ff8\") " pod="kube-system/kube-scheduler-ip-172-31-18-222" Nov 12 20:50:40.647106 kubelet[2874]: E1112 20:50:40.647032 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-222?timeout=10s\": dial tcp 
172.31.18.222:6443: connect: connection refused" interval="800ms" Nov 12 20:50:40.763048 containerd[1962]: time="2024-11-12T20:50:40.762923355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-222,Uid:7c92378cca9e25a5505a051499930e38,Namespace:kube-system,Attempt:0,}" Nov 12 20:50:40.765665 kubelet[2874]: I1112 20:50:40.765469 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-222" Nov 12 20:50:40.766162 kubelet[2874]: E1112 20:50:40.766130 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.222:6443/api/v1/nodes\": dial tcp 172.31.18.222:6443: connect: connection refused" node="ip-172-31-18-222" Nov 12 20:50:40.772558 containerd[1962]: time="2024-11-12T20:50:40.772505883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-222,Uid:b50458b7192344c8d21ed8dc69ac9f6b,Namespace:kube-system,Attempt:0,}" Nov 12 20:50:40.775870 containerd[1962]: time="2024-11-12T20:50:40.775833815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-222,Uid:d3f1e6cef35001581f234d430af14ff8,Namespace:kube-system,Attempt:0,}" Nov 12 20:50:40.952078 kubelet[2874]: W1112 20:50:40.952010 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:40.952078 kubelet[2874]: E1112 20:50:40.952082 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:41.117181 update_engine[1948]: I20241112 20:50:41.117107 1948 update_attempter.cc:509] Updating boot flags... 
Nov 12 20:50:41.143920 kubelet[2874]: W1112 20:50:41.143808 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-222&limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:41.143920 kubelet[2874]: E1112 20:50:41.143894 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-222&limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:41.186100 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2919) Nov 12 20:50:41.309380 kubelet[2874]: W1112 20:50:41.308014 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:41.311925 kubelet[2874]: E1112 20:50:41.311816 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:41.356923 kubelet[2874]: W1112 20:50:41.354225 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:41.356923 kubelet[2874]: E1112 20:50:41.354332 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://172.31.18.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:41.408853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607844758.mount: Deactivated successfully. Nov 12 20:50:41.414982 containerd[1962]: time="2024-11-12T20:50:41.414932996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:50:41.423375 containerd[1962]: time="2024-11-12T20:50:41.423280589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:50:41.427108 containerd[1962]: time="2024-11-12T20:50:41.425459466Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:50:41.430213 containerd[1962]: time="2024-11-12T20:50:41.429623006Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:50:41.432278 containerd[1962]: time="2024-11-12T20:50:41.432212663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:50:41.433612 containerd[1962]: time="2024-11-12T20:50:41.433422595Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:50:41.434612 containerd[1962]: time="2024-11-12T20:50:41.434464397Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:50:41.442652 containerd[1962]: time="2024-11-12T20:50:41.442113412Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:50:41.448630 kubelet[2874]: E1112 20:50:41.448553 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-222?timeout=10s\": dial tcp 172.31.18.222:6443: connect: connection refused" interval="1.6s" Nov 12 20:50:41.449147 containerd[1962]: time="2024-11-12T20:50:41.448949162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.030476ms" Nov 12 20:50:41.463936 containerd[1962]: time="2024-11-12T20:50:41.462453957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 699.432904ms" Nov 12 20:50:41.468126 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2924) Nov 12 20:50:41.481302 containerd[1962]: time="2024-11-12T20:50:41.480944464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 708.349126ms" Nov 12 20:50:41.570525 kubelet[2874]: I1112 
20:50:41.570093 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-222" Nov 12 20:50:41.570525 kubelet[2874]: E1112 20:50:41.570486 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.222:6443/api/v1/nodes\": dial tcp 172.31.18.222:6443: connect: connection refused" node="ip-172-31-18-222" Nov 12 20:50:41.848104 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2924) Nov 12 20:50:41.891168 containerd[1962]: time="2024-11-12T20:50:41.889668280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:50:41.891168 containerd[1962]: time="2024-11-12T20:50:41.889756406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:50:41.891168 containerd[1962]: time="2024-11-12T20:50:41.889781491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:41.891168 containerd[1962]: time="2024-11-12T20:50:41.889869615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:41.912170 containerd[1962]: time="2024-11-12T20:50:41.910751594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:50:41.912170 containerd[1962]: time="2024-11-12T20:50:41.911528073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:50:41.913059 containerd[1962]: time="2024-11-12T20:50:41.912045069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:41.919056 containerd[1962]: time="2024-11-12T20:50:41.916816513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:41.940942 containerd[1962]: time="2024-11-12T20:50:41.940818604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:50:41.941157 containerd[1962]: time="2024-11-12T20:50:41.941009981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:50:41.941157 containerd[1962]: time="2024-11-12T20:50:41.941033362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:41.944136 containerd[1962]: time="2024-11-12T20:50:41.941283186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:50:41.974437 kubelet[2874]: E1112 20:50:41.974348 2874 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:41.978655 systemd[1]: Started cri-containerd-cbef58cad21faf62919af425aad6cbca74efbdb35f890928bed391cab98d4975.scope - libcontainer container cbef58cad21faf62919af425aad6cbca74efbdb35f890928bed391cab98d4975. Nov 12 20:50:42.082757 systemd[1]: Started cri-containerd-ce9f5ae3afe1b679653c0dabd517c7a21af73f7fee976d287eb9b90db0d3b149.scope - libcontainer container ce9f5ae3afe1b679653c0dabd517c7a21af73f7fee976d287eb9b90db0d3b149. 
Nov 12 20:50:42.135604 systemd[1]: Started cri-containerd-6933de4fecee79143436b626a0de6792af0bcd45112d764e194e35b391e873ae.scope - libcontainer container 6933de4fecee79143436b626a0de6792af0bcd45112d764e194e35b391e873ae. Nov 12 20:50:42.252962 containerd[1962]: time="2024-11-12T20:50:42.252916401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-222,Uid:d3f1e6cef35001581f234d430af14ff8,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbef58cad21faf62919af425aad6cbca74efbdb35f890928bed391cab98d4975\"" Nov 12 20:50:42.264096 containerd[1962]: time="2024-11-12T20:50:42.263943824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-222,Uid:b50458b7192344c8d21ed8dc69ac9f6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce9f5ae3afe1b679653c0dabd517c7a21af73f7fee976d287eb9b90db0d3b149\"" Nov 12 20:50:42.271011 containerd[1962]: time="2024-11-12T20:50:42.269053279Z" level=info msg="CreateContainer within sandbox \"ce9f5ae3afe1b679653c0dabd517c7a21af73f7fee976d287eb9b90db0d3b149\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:50:42.271011 containerd[1962]: time="2024-11-12T20:50:42.269419794Z" level=info msg="CreateContainer within sandbox \"cbef58cad21faf62919af425aad6cbca74efbdb35f890928bed391cab98d4975\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:50:42.297968 containerd[1962]: time="2024-11-12T20:50:42.297908897Z" level=info msg="CreateContainer within sandbox \"ce9f5ae3afe1b679653c0dabd517c7a21af73f7fee976d287eb9b90db0d3b149\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29\"" Nov 12 20:50:42.298901 containerd[1962]: time="2024-11-12T20:50:42.298866760Z" level=info msg="StartContainer for \"d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29\"" Nov 12 20:50:42.310816 containerd[1962]: 
time="2024-11-12T20:50:42.310764617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-222,Uid:7c92378cca9e25a5505a051499930e38,Namespace:kube-system,Attempt:0,} returns sandbox id \"6933de4fecee79143436b626a0de6792af0bcd45112d764e194e35b391e873ae\"" Nov 12 20:50:42.313759 containerd[1962]: time="2024-11-12T20:50:42.313717589Z" level=info msg="CreateContainer within sandbox \"cbef58cad21faf62919af425aad6cbca74efbdb35f890928bed391cab98d4975\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238\"" Nov 12 20:50:42.314967 containerd[1962]: time="2024-11-12T20:50:42.314830681Z" level=info msg="StartContainer for \"ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238\"" Nov 12 20:50:42.316089 containerd[1962]: time="2024-11-12T20:50:42.314846986Z" level=info msg="CreateContainer within sandbox \"6933de4fecee79143436b626a0de6792af0bcd45112d764e194e35b391e873ae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:50:42.345725 containerd[1962]: time="2024-11-12T20:50:42.345676430Z" level=info msg="CreateContainer within sandbox \"6933de4fecee79143436b626a0de6792af0bcd45112d764e194e35b391e873ae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54837748f6fd8ef9422efda2000b3c9200e3bed6cabdba2f59b01cddb4e84904\"" Nov 12 20:50:42.348026 containerd[1962]: time="2024-11-12T20:50:42.347991370Z" level=info msg="StartContainer for \"54837748f6fd8ef9422efda2000b3c9200e3bed6cabdba2f59b01cddb4e84904\"" Nov 12 20:50:42.354400 systemd[1]: Started cri-containerd-d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29.scope - libcontainer container d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29. 
Nov 12 20:50:42.368396 systemd[1]: Started cri-containerd-ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238.scope - libcontainer container ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238. Nov 12 20:50:42.429042 systemd[1]: run-containerd-runc-k8s.io-54837748f6fd8ef9422efda2000b3c9200e3bed6cabdba2f59b01cddb4e84904-runc.EwEyoU.mount: Deactivated successfully. Nov 12 20:50:42.449320 systemd[1]: Started cri-containerd-54837748f6fd8ef9422efda2000b3c9200e3bed6cabdba2f59b01cddb4e84904.scope - libcontainer container 54837748f6fd8ef9422efda2000b3c9200e3bed6cabdba2f59b01cddb4e84904. Nov 12 20:50:42.519664 containerd[1962]: time="2024-11-12T20:50:42.519553252Z" level=info msg="StartContainer for \"d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29\" returns successfully" Nov 12 20:50:42.561201 containerd[1962]: time="2024-11-12T20:50:42.561160883Z" level=info msg="StartContainer for \"ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238\" returns successfully" Nov 12 20:50:42.581565 containerd[1962]: time="2024-11-12T20:50:42.581416116Z" level=info msg="StartContainer for \"54837748f6fd8ef9422efda2000b3c9200e3bed6cabdba2f59b01cddb4e84904\" returns successfully" Nov 12 20:50:42.622717 kubelet[2874]: W1112 20:50:42.622620 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:42.622717 kubelet[2874]: E1112 20:50:42.622720 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:42.975937 kubelet[2874]: W1112 20:50:42.975855 2874 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:42.975937 kubelet[2874]: E1112 20:50:42.975944 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:43.050343 kubelet[2874]: E1112 20:50:43.050238 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-222?timeout=10s\": dial tcp 172.31.18.222:6443: connect: connection refused" interval="3.2s" Nov 12 20:50:43.112560 kubelet[2874]: W1112 20:50:43.112481 2874 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:43.112560 kubelet[2874]: E1112 20:50:43.112569 2874 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.222:6443: connect: connection refused Nov 12 20:50:43.174312 kubelet[2874]: I1112 20:50:43.174273 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-222" Nov 12 20:50:43.174634 kubelet[2874]: E1112 20:50:43.174607 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.222:6443/api/v1/nodes\": dial tcp 172.31.18.222:6443: connect: connection refused" node="ip-172-31-18-222" Nov 12 20:50:45.877759 kubelet[2874]: E1112 20:50:45.877683 2874 
csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-222" not found Nov 12 20:50:45.976544 kubelet[2874]: I1112 20:50:45.976495 2874 apiserver.go:52] "Watching apiserver" Nov 12 20:50:46.040488 kubelet[2874]: I1112 20:50:46.040446 2874 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 12 20:50:46.236906 kubelet[2874]: E1112 20:50:46.236793 2874 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-18-222" not found Nov 12 20:50:46.263958 kubelet[2874]: E1112 20:50:46.263842 2874 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-222\" not found" node="ip-172-31-18-222" Nov 12 20:50:46.380393 kubelet[2874]: I1112 20:50:46.380013 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-222" Nov 12 20:50:46.395837 kubelet[2874]: I1112 20:50:46.395800 2874 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-222" Nov 12 20:50:47.763884 systemd[1]: Reloading requested from client PID 3422 ('systemctl') (unit session-9.scope)... Nov 12 20:50:47.763905 systemd[1]: Reloading... Nov 12 20:50:48.025395 zram_generator::config[3462]: No configuration found. Nov 12 20:50:48.285317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:50:48.505249 systemd[1]: Reloading finished in 740 ms. Nov 12 20:50:48.550116 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 12 20:50:48.551508 kubelet[2874]: E1112 20:50:48.550978 2874 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ip-172-31-18-222.180753b9fa9e1773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-222,UID:ip-172-31-18-222,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-222,},FirstTimestamp:2024-11-12 20:50:39.985964915 +0000 UTC m=+0.698641022,LastTimestamp:2024-11-12 20:50:39.985964915 +0000 UTC m=+0.698641022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-222,}" Nov 12 20:50:48.551508 kubelet[2874]: I1112 20:50:48.551264 2874 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:50:48.560634 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:50:48.561176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:50:48.571589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:50:48.959286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:50:48.970908 (kubelet)[3519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:50:49.078857 kubelet[3519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:50:49.078857 kubelet[3519]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Nov 12 20:50:49.078857 kubelet[3519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:50:49.081266 kubelet[3519]: I1112 20:50:49.078944 3519 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:50:49.085785 kubelet[3519]: I1112 20:50:49.085749 3519 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 20:50:49.085785 kubelet[3519]: I1112 20:50:49.085776 3519 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:50:49.086043 kubelet[3519]: I1112 20:50:49.086022 3519 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 20:50:49.092112 kubelet[3519]: I1112 20:50:49.090824 3519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:50:49.092691 kubelet[3519]: I1112 20:50:49.092661 3519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:50:49.136464 kubelet[3519]: I1112 20:50:49.136428 3519 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:50:49.136913 kubelet[3519]: I1112 20:50:49.136884 3519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:50:49.142519 kubelet[3519]: I1112 20:50:49.137014 3519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-222","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:50:49.142867 kubelet[3519]: I1112 20:50:49.142545 3519 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 
20:50:49.142867 kubelet[3519]: I1112 20:50:49.142563 3519 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:50:49.150280 kubelet[3519]: I1112 20:50:49.149881 3519 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:50:49.150280 kubelet[3519]: I1112 20:50:49.150050 3519 kubelet.go:400] "Attempting to sync node with API server" Nov 12 20:50:49.150280 kubelet[3519]: I1112 20:50:49.150085 3519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:50:49.150280 kubelet[3519]: I1112 20:50:49.150122 3519 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:50:49.150280 kubelet[3519]: I1112 20:50:49.150147 3519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:50:49.163420 kubelet[3519]: I1112 20:50:49.162657 3519 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:50:49.168460 kubelet[3519]: I1112 20:50:49.168427 3519 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:50:49.169245 kubelet[3519]: I1112 20:50:49.169221 3519 server.go:1264] "Started kubelet" Nov 12 20:50:49.172244 kubelet[3519]: I1112 20:50:49.172218 3519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:50:49.180486 kubelet[3519]: I1112 20:50:49.180433 3519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:50:49.187161 kubelet[3519]: I1112 20:50:49.186250 3519 server.go:455] "Adding debug handlers to kubelet server" Nov 12 20:50:49.190219 kubelet[3519]: I1112 20:50:49.190186 3519 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:50:49.206616 kubelet[3519]: I1112 20:50:49.190366 3519 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 20:50:49.208193 kubelet[3519]: I1112 20:50:49.208160 3519 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:50:49.208387 
kubelet[3519]: I1112 20:50:49.196538 3519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:50:49.208564 kubelet[3519]: I1112 20:50:49.208544 3519 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:50:49.214592 kubelet[3519]: I1112 20:50:49.214040 3519 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:50:49.215828 kubelet[3519]: I1112 20:50:49.215791 3519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:50:49.224311 kubelet[3519]: I1112 20:50:49.224279 3519 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:50:49.224959 kubelet[3519]: E1112 20:50:49.224937 3519 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:50:49.251290 kubelet[3519]: I1112 20:50:49.251251 3519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:50:49.261139 kubelet[3519]: I1112 20:50:49.260798 3519 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:50:49.261139 kubelet[3519]: I1112 20:50:49.260834 3519 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:50:49.261139 kubelet[3519]: I1112 20:50:49.260854 3519 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 20:50:49.261139 kubelet[3519]: E1112 20:50:49.260912 3519 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:50:49.303202 kubelet[3519]: I1112 20:50:49.302663 3519 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-222" Nov 12 20:50:49.320163 kubelet[3519]: I1112 20:50:49.319697 3519 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-222" Nov 12 20:50:49.320163 kubelet[3519]: I1112 20:50:49.319775 3519 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-222" Nov 12 20:50:49.356987 kubelet[3519]: I1112 20:50:49.356865 3519 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:50:49.356987 kubelet[3519]: I1112 20:50:49.356883 3519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:50:49.356987 kubelet[3519]: I1112 20:50:49.356905 3519 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:50:49.357758 kubelet[3519]: I1112 20:50:49.357570 3519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:50:49.357758 kubelet[3519]: I1112 20:50:49.357586 3519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:50:49.357758 kubelet[3519]: I1112 20:50:49.357616 3519 policy_none.go:49] "None policy: Start" Nov 12 20:50:49.360258 kubelet[3519]: I1112 20:50:49.359338 3519 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:50:49.360258 kubelet[3519]: I1112 20:50:49.359358 3519 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:50:49.360258 kubelet[3519]: I1112 20:50:49.359511 3519 state_mem.go:75] "Updated 
machine memory state" Nov 12 20:50:49.361638 kubelet[3519]: E1112 20:50:49.361622 3519 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:50:49.366360 kubelet[3519]: I1112 20:50:49.366333 3519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:50:49.366891 kubelet[3519]: I1112 20:50:49.366788 3519 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:50:49.370880 kubelet[3519]: I1112 20:50:49.369397 3519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:50:49.565082 kubelet[3519]: I1112 20:50:49.563783 3519 topology_manager.go:215] "Topology Admit Handler" podUID="7c92378cca9e25a5505a051499930e38" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-222" Nov 12 20:50:49.565082 kubelet[3519]: I1112 20:50:49.564119 3519 topology_manager.go:215] "Topology Admit Handler" podUID="b50458b7192344c8d21ed8dc69ac9f6b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:49.565082 kubelet[3519]: I1112 20:50:49.564694 3519 topology_manager.go:215] "Topology Admit Handler" podUID="d3f1e6cef35001581f234d430af14ff8" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-222" Nov 12 20:50:49.612176 kubelet[3519]: I1112 20:50:49.611891 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c92378cca9e25a5505a051499930e38-ca-certs\") pod \"kube-apiserver-ip-172-31-18-222\" (UID: \"7c92378cca9e25a5505a051499930e38\") " pod="kube-system/kube-apiserver-ip-172-31-18-222" Nov 12 20:50:49.612176 kubelet[3519]: I1112 20:50:49.611938 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7c92378cca9e25a5505a051499930e38-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-222\" (UID: \"7c92378cca9e25a5505a051499930e38\") " pod="kube-system/kube-apiserver-ip-172-31-18-222" Nov 12 20:50:49.612176 kubelet[3519]: I1112 20:50:49.611963 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:49.612176 kubelet[3519]: I1112 20:50:49.611991 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c92378cca9e25a5505a051499930e38-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-222\" (UID: \"7c92378cca9e25a5505a051499930e38\") " pod="kube-system/kube-apiserver-ip-172-31-18-222" Nov 12 20:50:49.612176 kubelet[3519]: I1112 20:50:49.612013 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:49.612494 kubelet[3519]: I1112 20:50:49.612033 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:49.612494 kubelet[3519]: I1112 20:50:49.612059 3519 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:49.612494 kubelet[3519]: I1112 20:50:49.612095 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b50458b7192344c8d21ed8dc69ac9f6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-222\" (UID: \"b50458b7192344c8d21ed8dc69ac9f6b\") " pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:49.612494 kubelet[3519]: I1112 20:50:49.612127 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3f1e6cef35001581f234d430af14ff8-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-222\" (UID: \"d3f1e6cef35001581f234d430af14ff8\") " pod="kube-system/kube-scheduler-ip-172-31-18-222" Nov 12 20:50:49.621212 kubelet[3519]: E1112 20:50:49.620905 3519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-18-222\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-222" Nov 12 20:50:50.160386 kubelet[3519]: I1112 20:50:50.160343 3519 apiserver.go:52] "Watching apiserver" Nov 12 20:50:50.207653 kubelet[3519]: I1112 20:50:50.207559 3519 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 12 20:50:50.305619 kubelet[3519]: E1112 20:50:50.305576 3519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-222\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-222" Nov 12 20:50:50.315788 kubelet[3519]: I1112 20:50:50.315718 3519 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-222" podStartSLOduration=1.315697402 podStartE2EDuration="1.315697402s" podCreationTimestamp="2024-11-12 20:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:50:50.303369437 +0000 UTC m=+1.323222970" watchObservedRunningTime="2024-11-12 20:50:50.315697402 +0000 UTC m=+1.335550915" Nov 12 20:50:50.389567 kubelet[3519]: I1112 20:50:50.388252 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-222" podStartSLOduration=1.3874890020000001 podStartE2EDuration="1.387489002s" podCreationTimestamp="2024-11-12 20:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:50:50.316514393 +0000 UTC m=+1.336367924" watchObservedRunningTime="2024-11-12 20:50:50.387489002 +0000 UTC m=+1.407342534" Nov 12 20:50:50.427855 kubelet[3519]: I1112 20:50:50.427710 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-222" podStartSLOduration=3.427676746 podStartE2EDuration="3.427676746s" podCreationTimestamp="2024-11-12 20:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:50:50.393254104 +0000 UTC m=+1.413107637" watchObservedRunningTime="2024-11-12 20:50:50.427676746 +0000 UTC m=+1.447530472" Nov 12 20:50:53.363536 sudo[2319]: pam_unix(sudo:session): session closed for user root Nov 12 20:50:53.390534 sshd[2316]: pam_unix(sshd:session): session closed for user core Nov 12 20:50:53.395200 systemd-logind[1947]: Session 9 logged out. Waiting for processes to exit. 
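The `pod_startup_latency_tracker` records above carry their data as `key="value"` pairs (`pod="…"`, `podStartE2EDuration="1.315697402s"`, and so on). A small sketch that extracts the pod name and the quoted end-to-end duration from such a record — the function name and parsing approach are illustrative, not part of any kubelet tooling:

```python
import re

# Match quoted key="value" pairs as they appear in the latency records.
PAIR_RE = re.compile(r'(\w+)="([^"]*)"')

def startup_e2e_seconds(record: str):
    """Return (pod name, E2E startup duration in seconds) from a record."""
    fields = dict(PAIR_RE.findall(record))
    # durations are quoted with a trailing unit, e.g. "1.315697402s"
    return fields["pod"], float(fields["podStartE2EDuration"].rstrip("s"))

# Fields lifted from the kube-apiserver record in the log above:
sample = (
    'pod="kube-system/kube-apiserver-ip-172-31-18-222" '
    'podStartE2EDuration="1.315697402s" '
    'podCreationTimestamp="2024-11-12 20:50:49 +0000 UTC"'
)
name, secs = startup_e2e_seconds(sample)
```

Note that the unquoted `podStartSLOduration=…` field uses a different, bare-number encoding and would need separate handling.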
Nov 12 20:50:53.395203 systemd[1]: sshd@8-172.31.18.222:22-139.178.89.65:55866.service: Deactivated successfully. Nov 12 20:50:53.398145 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:50:53.398349 systemd[1]: session-9.scope: Consumed 4.976s CPU time, 186.2M memory peak, 0B memory swap peak. Nov 12 20:50:53.399270 systemd-logind[1947]: Removed session 9. Nov 12 20:51:02.019527 kubelet[3519]: I1112 20:51:02.019482 3519 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:51:02.030411 containerd[1962]: time="2024-11-12T20:51:02.030311861Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:51:02.034604 kubelet[3519]: I1112 20:51:02.034566 3519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:51:02.756329 kubelet[3519]: I1112 20:51:02.753042 3519 topology_manager.go:215] "Topology Admit Handler" podUID="f7d3e84d-add6-437a-a714-6e249ca3d5f0" podNamespace="kube-system" podName="kube-proxy-zqhqf" Nov 12 20:51:02.803717 systemd[1]: Created slice kubepods-besteffort-podf7d3e84d_add6_437a_a714_6e249ca3d5f0.slice - libcontainer container kubepods-besteffort-podf7d3e84d_add6_437a_a714_6e249ca3d5f0.slice. 
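The kubelet above pushed PodCIDR `192.168.0.0/24` to the container runtime and updated its own pod CIDR to match. A quick membership check against that range, using Python's standard `ipaddress` module (the helper name is my own):

```python
import ipaddress

# PodCIDR reported by the kubelet in the log above.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")

def in_pod_cidr(addr: str) -> bool:
    """True if addr falls inside the node's pod CIDR."""
    return ipaddress.ip_address(addr) in pod_cidr
```

Host addresses like the node's own `172.31.18.222` fall outside this range; only pod IPs allocated from the CIDR are inside it.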
Nov 12 20:51:02.926475 kubelet[3519]: I1112 20:51:02.926372 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7d3e84d-add6-437a-a714-6e249ca3d5f0-kube-proxy\") pod \"kube-proxy-zqhqf\" (UID: \"f7d3e84d-add6-437a-a714-6e249ca3d5f0\") " pod="kube-system/kube-proxy-zqhqf" Nov 12 20:51:02.926475 kubelet[3519]: I1112 20:51:02.926465 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7d3e84d-add6-437a-a714-6e249ca3d5f0-xtables-lock\") pod \"kube-proxy-zqhqf\" (UID: \"f7d3e84d-add6-437a-a714-6e249ca3d5f0\") " pod="kube-system/kube-proxy-zqhqf" Nov 12 20:51:02.926742 kubelet[3519]: I1112 20:51:02.926491 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7d3e84d-add6-437a-a714-6e249ca3d5f0-lib-modules\") pod \"kube-proxy-zqhqf\" (UID: \"f7d3e84d-add6-437a-a714-6e249ca3d5f0\") " pod="kube-system/kube-proxy-zqhqf" Nov 12 20:51:02.926742 kubelet[3519]: I1112 20:51:02.926520 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxh8\" (UniqueName: \"kubernetes.io/projected/f7d3e84d-add6-437a-a714-6e249ca3d5f0-kube-api-access-rnxh8\") pod \"kube-proxy-zqhqf\" (UID: \"f7d3e84d-add6-437a-a714-6e249ca3d5f0\") " pod="kube-system/kube-proxy-zqhqf" Nov 12 20:51:02.951440 kubelet[3519]: I1112 20:51:02.950252 3519 topology_manager.go:215] "Topology Admit Handler" podUID="66558590-0828-4e59-9f27-96d1cf5ae79d" podNamespace="tigera-operator" podName="tigera-operator-5645cfc98-znm6w" Nov 12 20:51:02.975456 systemd[1]: Created slice kubepods-besteffort-pod66558590_0828_4e59_9f27_96d1cf5ae79d.slice - libcontainer container kubepods-besteffort-pod66558590_0828_4e59_9f27_96d1cf5ae79d.slice. 
Nov 12 20:51:03.003685 kubelet[3519]: W1112 20:51:03.002378 3519 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-222" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-18-222' and this object Nov 12 20:51:03.003685 kubelet[3519]: E1112 20:51:03.002423 3519 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-222" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-18-222' and this object Nov 12 20:51:03.006469 kubelet[3519]: W1112 20:51:03.004731 3519 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-18-222" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-18-222' and this object Nov 12 20:51:03.006469 kubelet[3519]: E1112 20:51:03.005323 3519 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-18-222" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-18-222' and this object Nov 12 20:51:03.130322 kubelet[3519]: I1112 20:51:03.130260 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/66558590-0828-4e59-9f27-96d1cf5ae79d-var-lib-calico\") pod \"tigera-operator-5645cfc98-znm6w\" 
(UID: \"66558590-0828-4e59-9f27-96d1cf5ae79d\") " pod="tigera-operator/tigera-operator-5645cfc98-znm6w" Nov 12 20:51:03.130322 kubelet[3519]: I1112 20:51:03.130324 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t64f\" (UniqueName: \"kubernetes.io/projected/66558590-0828-4e59-9f27-96d1cf5ae79d-kube-api-access-5t64f\") pod \"tigera-operator-5645cfc98-znm6w\" (UID: \"66558590-0828-4e59-9f27-96d1cf5ae79d\") " pod="tigera-operator/tigera-operator-5645cfc98-znm6w" Nov 12 20:51:03.134438 containerd[1962]: time="2024-11-12T20:51:03.134129358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zqhqf,Uid:f7d3e84d-add6-437a-a714-6e249ca3d5f0,Namespace:kube-system,Attempt:0,}" Nov 12 20:51:03.178738 containerd[1962]: time="2024-11-12T20:51:03.178246001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:51:03.178738 containerd[1962]: time="2024-11-12T20:51:03.178399269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:51:03.178738 containerd[1962]: time="2024-11-12T20:51:03.178431594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:51:03.178738 containerd[1962]: time="2024-11-12T20:51:03.178627683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:51:03.212748 systemd[1]: run-containerd-runc-k8s.io-007ea638352b665bb6a09e04cfd61d9759793d868350aaa365a381e0c16d0ba3-runc.yPnVy1.mount: Deactivated successfully. Nov 12 20:51:03.225682 systemd[1]: Started cri-containerd-007ea638352b665bb6a09e04cfd61d9759793d868350aaa365a381e0c16d0ba3.scope - libcontainer container 007ea638352b665bb6a09e04cfd61d9759793d868350aaa365a381e0c16d0ba3. 
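The containerd sandbox and container identifiers in these entries (e.g. `007ea638…0ba3`) are 64 lowercase hex characters. A trivial validator for IDs lifted out of logs like this — purely an illustrative helper, not a containerd API:

```python
import re

# containerd container/sandbox IDs: exactly 64 lowercase hex characters.
ID_RE = re.compile(r"[0-9a-f]{64}")

def is_containerd_id(s: str) -> bool:
    """True if s looks like a containerd container/sandbox ID."""
    return ID_RE.fullmatch(s) is not None
```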
Nov 12 20:51:03.274663 containerd[1962]: time="2024-11-12T20:51:03.273859996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zqhqf,Uid:f7d3e84d-add6-437a-a714-6e249ca3d5f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"007ea638352b665bb6a09e04cfd61d9759793d868350aaa365a381e0c16d0ba3\"" Nov 12 20:51:03.298041 containerd[1962]: time="2024-11-12T20:51:03.297985233Z" level=info msg="CreateContainer within sandbox \"007ea638352b665bb6a09e04cfd61d9759793d868350aaa365a381e0c16d0ba3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:51:03.325777 containerd[1962]: time="2024-11-12T20:51:03.325697413Z" level=info msg="CreateContainer within sandbox \"007ea638352b665bb6a09e04cfd61d9759793d868350aaa365a381e0c16d0ba3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9b5d6f5ee921120a979a7976aea888a7a7258aed11544db0c9ded72e8fa5d24a\"" Nov 12 20:51:03.327433 containerd[1962]: time="2024-11-12T20:51:03.327378598Z" level=info msg="StartContainer for \"9b5d6f5ee921120a979a7976aea888a7a7258aed11544db0c9ded72e8fa5d24a\"" Nov 12 20:51:03.379180 systemd[1]: Started cri-containerd-9b5d6f5ee921120a979a7976aea888a7a7258aed11544db0c9ded72e8fa5d24a.scope - libcontainer container 9b5d6f5ee921120a979a7976aea888a7a7258aed11544db0c9ded72e8fa5d24a. 
Nov 12 20:51:03.429285 containerd[1962]: time="2024-11-12T20:51:03.428591409Z" level=info msg="StartContainer for \"9b5d6f5ee921120a979a7976aea888a7a7258aed11544db0c9ded72e8fa5d24a\" returns successfully" Nov 12 20:51:04.244123 kubelet[3519]: E1112 20:51:04.243968 3519 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 12 20:51:04.244694 kubelet[3519]: E1112 20:51:04.244154 3519 projected.go:200] Error preparing data for projected volume kube-api-access-5t64f for pod tigera-operator/tigera-operator-5645cfc98-znm6w: failed to sync configmap cache: timed out waiting for the condition Nov 12 20:51:04.244694 kubelet[3519]: E1112 20:51:04.244263 3519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/66558590-0828-4e59-9f27-96d1cf5ae79d-kube-api-access-5t64f podName:66558590-0828-4e59-9f27-96d1cf5ae79d nodeName:}" failed. No retries permitted until 2024-11-12 20:51:04.744234928 +0000 UTC m=+15.764088441 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5t64f" (UniqueName: "kubernetes.io/projected/66558590-0828-4e59-9f27-96d1cf5ae79d-kube-api-access-5t64f") pod "tigera-operator-5645cfc98-znm6w" (UID: "66558590-0828-4e59-9f27-96d1cf5ae79d") : failed to sync configmap cache: timed out waiting for the condition Nov 12 20:51:04.369565 kubelet[3519]: I1112 20:51:04.369458 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zqhqf" podStartSLOduration=2.369433509 podStartE2EDuration="2.369433509s" podCreationTimestamp="2024-11-12 20:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:51:04.368789067 +0000 UTC m=+15.388642602" watchObservedRunningTime="2024-11-12 20:51:04.369433509 +0000 UTC m=+15.389287054" Nov 12 20:51:04.796057 containerd[1962]: time="2024-11-12T20:51:04.796015356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-znm6w,Uid:66558590-0828-4e59-9f27-96d1cf5ae79d,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:51:04.840615 containerd[1962]: time="2024-11-12T20:51:04.840045627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:51:04.840615 containerd[1962]: time="2024-11-12T20:51:04.840198777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:51:04.840615 containerd[1962]: time="2024-11-12T20:51:04.840223063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:51:04.840615 containerd[1962]: time="2024-11-12T20:51:04.840327675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:51:04.881296 systemd[1]: Started cri-containerd-5771091f10aaea0d5921ce80db1a9c9a60bea9bdecfc00bdc6a6bbf4ab5a0407.scope - libcontainer container 5771091f10aaea0d5921ce80db1a9c9a60bea9bdecfc00bdc6a6bbf4ab5a0407. Nov 12 20:51:04.931489 containerd[1962]: time="2024-11-12T20:51:04.931440235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-znm6w,Uid:66558590-0828-4e59-9f27-96d1cf5ae79d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5771091f10aaea0d5921ce80db1a9c9a60bea9bdecfc00bdc6a6bbf4ab5a0407\"" Nov 12 20:51:04.948565 containerd[1962]: time="2024-11-12T20:51:04.948514117Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:51:08.806816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991520789.mount: Deactivated successfully. Nov 12 20:51:09.665393 containerd[1962]: time="2024-11-12T20:51:09.665337390Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:09.666761 containerd[1962]: time="2024-11-12T20:51:09.666602489Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763351" Nov 12 20:51:09.669609 containerd[1962]: time="2024-11-12T20:51:09.668363742Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:09.671350 containerd[1962]: time="2024-11-12T20:51:09.671311777Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:09.672215 containerd[1962]: time="2024-11-12T20:51:09.672178975Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id 
\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 4.723620867s" Nov 12 20:51:09.672385 containerd[1962]: time="2024-11-12T20:51:09.672348502Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:51:09.683456 containerd[1962]: time="2024-11-12T20:51:09.682596311Z" level=info msg="CreateContainer within sandbox \"5771091f10aaea0d5921ce80db1a9c9a60bea9bdecfc00bdc6a6bbf4ab5a0407\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:51:09.709992 containerd[1962]: time="2024-11-12T20:51:09.709945915Z" level=info msg="CreateContainer within sandbox \"5771091f10aaea0d5921ce80db1a9c9a60bea9bdecfc00bdc6a6bbf4ab5a0407\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb\"" Nov 12 20:51:09.710169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339442208.mount: Deactivated successfully. Nov 12 20:51:09.711756 containerd[1962]: time="2024-11-12T20:51:09.711715492Z" level=info msg="StartContainer for \"1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb\"" Nov 12 20:51:09.751328 systemd[1]: Started cri-containerd-1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb.scope - libcontainer container 1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb. 
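The pull record above reports the tigera-operator image as 21757542 bytes fetched in 4.723620867 s. A one-liner turning those two figures into an approximate pull throughput (the function is my own convenience, not containerd output):

```python
def pull_throughput_mib_s(size_bytes: int, seconds: float) -> float:
    """Approximate image pull throughput in MiB/s."""
    return size_bytes / seconds / (1024 * 1024)

# Figures from the PullImage record above: roughly 4.4 MiB/s.
rate = pull_throughput_mib_s(21757542, 4.723620867)
```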
Nov 12 20:51:09.786963 containerd[1962]: time="2024-11-12T20:51:09.786917566Z" level=info msg="StartContainer for \"1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb\" returns successfully" Nov 12 20:51:10.389958 kubelet[3519]: I1112 20:51:10.389881 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5645cfc98-znm6w" podStartSLOduration=3.648420453 podStartE2EDuration="8.389859485s" podCreationTimestamp="2024-11-12 20:51:02 +0000 UTC" firstStartedPulling="2024-11-12 20:51:04.938790401 +0000 UTC m=+15.958643913" lastFinishedPulling="2024-11-12 20:51:09.680229426 +0000 UTC m=+20.700082945" observedRunningTime="2024-11-12 20:51:10.3877783 +0000 UTC m=+21.407631832" watchObservedRunningTime="2024-11-12 20:51:10.389859485 +0000 UTC m=+21.409713019" Nov 12 20:51:13.318133 kubelet[3519]: I1112 20:51:13.317952 3519 topology_manager.go:215] "Topology Admit Handler" podUID="484376f9-8a7e-4414-90d6-5322b126da27" podNamespace="calico-system" podName="calico-typha-64fbd4c589-p4zqz" Nov 12 20:51:13.365674 systemd[1]: Created slice kubepods-besteffort-pod484376f9_8a7e_4414_90d6_5322b126da27.slice - libcontainer container kubepods-besteffort-pod484376f9_8a7e_4414_90d6_5322b126da27.slice. 
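The kubelet timestamps in these records carry a monotonic-clock offset suffix (`m=+15.958643913`); subtracting two offsets gives an elapsed time immune to wall-clock adjustments. A sketch computing the pull duration from the `firstStartedPulling` and `lastFinishedPulling` stamps above (regex and helper are illustrative):

```python
import re

# Monotonic offset suffix as emitted in kubelet timestamps, e.g. "m=+15.958643913".
MONO_RE = re.compile(r"m=\+(\d+\.\d+)")

def monotonic_delta(earlier: str, later: str) -> float:
    """Elapsed seconds between two timestamps via their monotonic offsets."""
    a = float(MONO_RE.search(earlier).group(1))
    b = float(MONO_RE.search(later).group(1))
    return b - a

# Stamps from the tigera-operator startup record above:
pull_secs = monotonic_delta(
    "2024-11-12 20:51:04.938790401 +0000 UTC m=+15.958643913",
    "2024-11-12 20:51:09.680229426 +0000 UTC m=+20.700082945",
)
```

The ~4.74 s result is consistent with the "in 4.723620867s" pull duration containerd itself reported.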
Nov 12 20:51:13.440879 kubelet[3519]: I1112 20:51:13.440000 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/484376f9-8a7e-4414-90d6-5322b126da27-typha-certs\") pod \"calico-typha-64fbd4c589-p4zqz\" (UID: \"484376f9-8a7e-4414-90d6-5322b126da27\") " pod="calico-system/calico-typha-64fbd4c589-p4zqz" Nov 12 20:51:13.441262 kubelet[3519]: I1112 20:51:13.441150 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/484376f9-8a7e-4414-90d6-5322b126da27-tigera-ca-bundle\") pod \"calico-typha-64fbd4c589-p4zqz\" (UID: \"484376f9-8a7e-4414-90d6-5322b126da27\") " pod="calico-system/calico-typha-64fbd4c589-p4zqz" Nov 12 20:51:13.441501 kubelet[3519]: I1112 20:51:13.441393 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzztv\" (UniqueName: \"kubernetes.io/projected/484376f9-8a7e-4414-90d6-5322b126da27-kube-api-access-kzztv\") pod \"calico-typha-64fbd4c589-p4zqz\" (UID: \"484376f9-8a7e-4414-90d6-5322b126da27\") " pod="calico-system/calico-typha-64fbd4c589-p4zqz" Nov 12 20:51:13.499273 kubelet[3519]: I1112 20:51:13.499215 3519 topology_manager.go:215] "Topology Admit Handler" podUID="42215328-2581-4c3c-973d-7929b354c338" podNamespace="calico-system" podName="calico-node-ksgbj" Nov 12 20:51:13.525159 systemd[1]: Created slice kubepods-besteffort-pod42215328_2581_4c3c_973d_7929b354c338.slice - libcontainer container kubepods-besteffort-pod42215328_2581_4c3c_973d_7929b354c338.slice. 
Nov 12 20:51:13.545090 kubelet[3519]: I1112 20:51:13.542455 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-policysync\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545090 kubelet[3519]: I1112 20:51:13.542510 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42215328-2581-4c3c-973d-7929b354c338-tigera-ca-bundle\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545090 kubelet[3519]: I1112 20:51:13.542540 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/42215328-2581-4c3c-973d-7929b354c338-node-certs\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545090 kubelet[3519]: I1112 20:51:13.542589 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-log-dir\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545090 kubelet[3519]: I1112 20:51:13.542612 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-var-run-calico\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545434 kubelet[3519]: I1112 20:51:13.542653 3519 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-lib-modules\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545434 kubelet[3519]: I1112 20:51:13.542677 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-flexvol-driver-host\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545434 kubelet[3519]: I1112 20:51:13.542705 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wscts\" (UniqueName: \"kubernetes.io/projected/42215328-2581-4c3c-973d-7929b354c338-kube-api-access-wscts\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545434 kubelet[3519]: I1112 20:51:13.542727 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-xtables-lock\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545434 kubelet[3519]: I1112 20:51:13.542748 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-var-lib-calico\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545651 kubelet[3519]: I1112 20:51:13.542870 3519 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-bin-dir\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.545651 kubelet[3519]: I1112 20:51:13.542895 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-net-dir\") pod \"calico-node-ksgbj\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " pod="calico-system/calico-node-ksgbj" Nov 12 20:51:13.673701 kubelet[3519]: E1112 20:51:13.673421 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.673701 kubelet[3519]: W1112 20:51:13.673452 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.673701 kubelet[3519]: E1112 20:51:13.673488 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.696281 kubelet[3519]: E1112 20:51:13.695204 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.696281 kubelet[3519]: W1112 20:51:13.695327 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.696281 kubelet[3519]: E1112 20:51:13.695356 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.700808 containerd[1962]: time="2024-11-12T20:51:13.700767946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64fbd4c589-p4zqz,Uid:484376f9-8a7e-4414-90d6-5322b126da27,Namespace:calico-system,Attempt:0,}" Nov 12 20:51:13.796448 containerd[1962]: time="2024-11-12T20:51:13.795663257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:51:13.796448 containerd[1962]: time="2024-11-12T20:51:13.795810249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:51:13.796448 containerd[1962]: time="2024-11-12T20:51:13.795838289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:51:13.796448 containerd[1962]: time="2024-11-12T20:51:13.796042360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:51:13.810249 kubelet[3519]: I1112 20:51:13.810197 3519 topology_manager.go:215] "Topology Admit Handler" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" podNamespace="calico-system" podName="csi-node-driver-xw4fv" Nov 12 20:51:13.821942 kubelet[3519]: E1112 20:51:13.821630 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:51:13.830337 kubelet[3519]: E1112 20:51:13.830195 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.831988 kubelet[3519]: W1112 20:51:13.831813 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.831988 kubelet[3519]: E1112 20:51:13.833140 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.835726 kubelet[3519]: E1112 20:51:13.835456 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.835726 kubelet[3519]: W1112 20:51:13.835481 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.835726 kubelet[3519]: E1112 20:51:13.835510 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.837164 kubelet[3519]: E1112 20:51:13.835855 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.837164 kubelet[3519]: W1112 20:51:13.835868 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.837164 kubelet[3519]: E1112 20:51:13.835882 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.837164 kubelet[3519]: E1112 20:51:13.836162 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.837164 kubelet[3519]: W1112 20:51:13.836172 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.837164 kubelet[3519]: E1112 20:51:13.836184 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.837164 kubelet[3519]: E1112 20:51:13.836505 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.837164 kubelet[3519]: W1112 20:51:13.836515 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.837164 kubelet[3519]: E1112 20:51:13.836528 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.837164 kubelet[3519]: E1112 20:51:13.836775 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.837641 kubelet[3519]: W1112 20:51:13.836784 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.837641 kubelet[3519]: E1112 20:51:13.836797 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.837641 kubelet[3519]: E1112 20:51:13.837111 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.837641 kubelet[3519]: W1112 20:51:13.837138 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.837641 kubelet[3519]: E1112 20:51:13.837150 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.839987 kubelet[3519]: E1112 20:51:13.839375 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.839987 kubelet[3519]: W1112 20:51:13.839388 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.839987 kubelet[3519]: E1112 20:51:13.839404 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.840923 containerd[1962]: time="2024-11-12T20:51:13.840504764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ksgbj,Uid:42215328-2581-4c3c-973d-7929b354c338,Namespace:calico-system,Attempt:0,}" Nov 12 20:51:13.841347 kubelet[3519]: E1112 20:51:13.841220 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.841347 kubelet[3519]: W1112 20:51:13.841234 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.841347 kubelet[3519]: E1112 20:51:13.841301 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.842623 kubelet[3519]: E1112 20:51:13.841838 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.842623 kubelet[3519]: W1112 20:51:13.841867 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.842623 kubelet[3519]: E1112 20:51:13.841881 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.844234 kubelet[3519]: E1112 20:51:13.843354 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.844234 kubelet[3519]: W1112 20:51:13.843387 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.844234 kubelet[3519]: E1112 20:51:13.843405 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.844234 kubelet[3519]: E1112 20:51:13.843662 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.844234 kubelet[3519]: W1112 20:51:13.843674 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.844234 kubelet[3519]: E1112 20:51:13.843687 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.844234 kubelet[3519]: E1112 20:51:13.843953 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.844234 kubelet[3519]: W1112 20:51:13.843964 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.844234 kubelet[3519]: E1112 20:51:13.843976 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.845290 kubelet[3519]: E1112 20:51:13.845261 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.845290 kubelet[3519]: W1112 20:51:13.845280 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.845462 kubelet[3519]: E1112 20:51:13.845295 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.847580 kubelet[3519]: E1112 20:51:13.847561 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.847580 kubelet[3519]: W1112 20:51:13.847580 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.847728 kubelet[3519]: E1112 20:51:13.847597 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.852555 kubelet[3519]: E1112 20:51:13.852526 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.852555 kubelet[3519]: W1112 20:51:13.852552 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.852768 kubelet[3519]: E1112 20:51:13.852578 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.854455 kubelet[3519]: E1112 20:51:13.854225 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.854455 kubelet[3519]: W1112 20:51:13.854278 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.854455 kubelet[3519]: E1112 20:51:13.854302 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.855128 systemd[1]: Started cri-containerd-320588942d1d87d03ff3bc6d5e9ccd9aa18e73a48c24ef18952b6e9ac356a48e.scope - libcontainer container 320588942d1d87d03ff3bc6d5e9ccd9aa18e73a48c24ef18952b6e9ac356a48e. Nov 12 20:51:13.855562 kubelet[3519]: E1112 20:51:13.855277 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.855562 kubelet[3519]: W1112 20:51:13.855302 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.855562 kubelet[3519]: E1112 20:51:13.855322 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.857105 kubelet[3519]: E1112 20:51:13.856218 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.857105 kubelet[3519]: W1112 20:51:13.856252 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.857105 kubelet[3519]: E1112 20:51:13.856269 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.858411 kubelet[3519]: E1112 20:51:13.857636 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.858567 kubelet[3519]: W1112 20:51:13.858548 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.859607 kubelet[3519]: E1112 20:51:13.859384 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.860510 kubelet[3519]: E1112 20:51:13.860488 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.860510 kubelet[3519]: W1112 20:51:13.860508 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.860628 kubelet[3519]: E1112 20:51:13.860526 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.860628 kubelet[3519]: I1112 20:51:13.860558 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/579a082e-68d8-4be8-b0d9-8983607906fe-socket-dir\") pod \"csi-node-driver-xw4fv\" (UID: \"579a082e-68d8-4be8-b0d9-8983607906fe\") " pod="calico-system/csi-node-driver-xw4fv" Nov 12 20:51:13.862100 kubelet[3519]: E1112 20:51:13.861385 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.862100 kubelet[3519]: W1112 20:51:13.861403 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.862100 kubelet[3519]: E1112 20:51:13.861420 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.862100 kubelet[3519]: I1112 20:51:13.861525 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/579a082e-68d8-4be8-b0d9-8983607906fe-varrun\") pod \"csi-node-driver-xw4fv\" (UID: \"579a082e-68d8-4be8-b0d9-8983607906fe\") " pod="calico-system/csi-node-driver-xw4fv" Nov 12 20:51:13.862381 kubelet[3519]: E1112 20:51:13.862155 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.862381 kubelet[3519]: W1112 20:51:13.862170 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.862381 kubelet[3519]: E1112 20:51:13.862186 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.862381 kubelet[3519]: I1112 20:51:13.862213 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/579a082e-68d8-4be8-b0d9-8983607906fe-kubelet-dir\") pod \"csi-node-driver-xw4fv\" (UID: \"579a082e-68d8-4be8-b0d9-8983607906fe\") " pod="calico-system/csi-node-driver-xw4fv" Nov 12 20:51:13.863168 kubelet[3519]: E1112 20:51:13.863143 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.863168 kubelet[3519]: W1112 20:51:13.863168 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.863753 kubelet[3519]: E1112 20:51:13.863726 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.863862 kubelet[3519]: I1112 20:51:13.863768 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/579a082e-68d8-4be8-b0d9-8983607906fe-registration-dir\") pod \"csi-node-driver-xw4fv\" (UID: \"579a082e-68d8-4be8-b0d9-8983607906fe\") " pod="calico-system/csi-node-driver-xw4fv" Nov 12 20:51:13.864429 kubelet[3519]: E1112 20:51:13.864406 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.864429 kubelet[3519]: W1112 20:51:13.864427 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.864737 kubelet[3519]: E1112 20:51:13.864715 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.865608 kubelet[3519]: I1112 20:51:13.865015 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8chn\" (UniqueName: \"kubernetes.io/projected/579a082e-68d8-4be8-b0d9-8983607906fe-kube-api-access-c8chn\") pod \"csi-node-driver-xw4fv\" (UID: \"579a082e-68d8-4be8-b0d9-8983607906fe\") " pod="calico-system/csi-node-driver-xw4fv" Nov 12 20:51:13.865608 kubelet[3519]: E1112 20:51:13.865342 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.865608 kubelet[3519]: W1112 20:51:13.865354 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.867265 kubelet[3519]: E1112 20:51:13.867223 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.867265 kubelet[3519]: W1112 20:51:13.867247 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.867916 kubelet[3519]: E1112 20:51:13.867575 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.867916 kubelet[3519]: E1112 20:51:13.867599 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.869287 kubelet[3519]: E1112 20:51:13.869258 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.869287 kubelet[3519]: W1112 20:51:13.869278 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.870057 kubelet[3519]: E1112 20:51:13.870031 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.870519 kubelet[3519]: E1112 20:51:13.870504 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.870667 kubelet[3519]: W1112 20:51:13.870622 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.870723 kubelet[3519]: E1112 20:51:13.870693 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.871742 kubelet[3519]: E1112 20:51:13.871247 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.871742 kubelet[3519]: W1112 20:51:13.871262 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.871742 kubelet[3519]: E1112 20:51:13.871658 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.873211 kubelet[3519]: E1112 20:51:13.873191 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.873211 kubelet[3519]: W1112 20:51:13.873210 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.873403 kubelet[3519]: E1112 20:51:13.873228 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:13.994217 kubelet[3519]: E1112 20:51:13.992843 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.994217 kubelet[3519]: W1112 20:51:13.992871 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:13.994217 kubelet[3519]: E1112 20:51:13.992969 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:13.997626 containerd[1962]: time="2024-11-12T20:51:13.996784740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:51:13.999056 kubelet[3519]: E1112 20:51:13.999024 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:13.999056 kubelet[3519]: W1112 20:51:13.999051 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:14.000996 kubelet[3519]: E1112 20:51:14.000967 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:14.001839 kubelet[3519]: E1112 20:51:14.001801 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:14.001839 kubelet[3519]: W1112 20:51:14.001827 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:14.002428 containerd[1962]: time="2024-11-12T20:51:14.001314589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:51:14.002428 containerd[1962]: time="2024-11-12T20:51:14.001350314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:51:14.002428 containerd[1962]: time="2024-11-12T20:51:14.001491442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:51:14.002810 kubelet[3519]: E1112 20:51:14.002571 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:14.011091 kubelet[3519]: E1112 20:51:14.003511 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:14.011091 kubelet[3519]: W1112 20:51:14.003530 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:14.011091 kubelet[3519]: E1112 20:51:14.003812 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:51:14.026091 kubelet[3519]: E1112 20:51:14.023325 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:14.026091 kubelet[3519]: W1112 20:51:14.023343 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:14.026091 kubelet[3519]: E1112 20:51:14.023362 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:14.046259 kubelet[3519]: E1112 20:51:14.046179 3519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:51:14.046259 kubelet[3519]: W1112 20:51:14.046205 3519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:51:14.046259 kubelet[3519]: E1112 20:51:14.046230 3519 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:51:14.071455 systemd[1]: Started cri-containerd-65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169.scope - libcontainer container 65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169. 
Nov 12 20:51:14.156281 containerd[1962]: time="2024-11-12T20:51:14.156229215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ksgbj,Uid:42215328-2581-4c3c-973d-7929b354c338,Namespace:calico-system,Attempt:0,} returns sandbox id \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\"" Nov 12 20:51:14.170850 containerd[1962]: time="2024-11-12T20:51:14.170780892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:51:14.214680 containerd[1962]: time="2024-11-12T20:51:14.214633765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64fbd4c589-p4zqz,Uid:484376f9-8a7e-4414-90d6-5322b126da27,Namespace:calico-system,Attempt:0,} returns sandbox id \"320588942d1d87d03ff3bc6d5e9ccd9aa18e73a48c24ef18952b6e9ac356a48e\"" Nov 12 20:51:15.270229 kubelet[3519]: E1112 20:51:15.270097 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:51:15.679706 containerd[1962]: time="2024-11-12T20:51:15.679658064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:15.681349 containerd[1962]: time="2024-11-12T20:51:15.681189227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:51:15.683088 containerd[1962]: time="2024-11-12T20:51:15.683029291Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:15.692750 containerd[1962]: time="2024-11-12T20:51:15.692670415Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:15.694724 containerd[1962]: time="2024-11-12T20:51:15.694672345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.523834912s" Nov 12 20:51:15.694859 containerd[1962]: time="2024-11-12T20:51:15.694728869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:51:15.696457 containerd[1962]: time="2024-11-12T20:51:15.696394666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:51:15.710415 containerd[1962]: time="2024-11-12T20:51:15.710372954Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:51:15.746872 containerd[1962]: time="2024-11-12T20:51:15.746822691Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c\"" Nov 12 20:51:15.753138 containerd[1962]: time="2024-11-12T20:51:15.751680043Z" level=info msg="StartContainer for \"a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c\"" Nov 12 20:51:15.824353 systemd[1]: Started cri-containerd-a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c.scope - 
libcontainer container a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c. Nov 12 20:51:15.863599 containerd[1962]: time="2024-11-12T20:51:15.863554896Z" level=info msg="StartContainer for \"a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c\" returns successfully" Nov 12 20:51:15.909365 systemd[1]: cri-containerd-a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c.scope: Deactivated successfully. Nov 12 20:51:15.973227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c-rootfs.mount: Deactivated successfully. Nov 12 20:51:16.269466 containerd[1962]: time="2024-11-12T20:51:16.228014435Z" level=info msg="shim disconnected" id=a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c namespace=k8s.io Nov 12 20:51:16.269466 containerd[1962]: time="2024-11-12T20:51:16.268746326Z" level=warning msg="cleaning up after shim disconnected" id=a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c namespace=k8s.io Nov 12 20:51:16.269466 containerd[1962]: time="2024-11-12T20:51:16.268772074Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:51:16.294303 containerd[1962]: time="2024-11-12T20:51:16.293435741Z" level=warning msg="cleanup warnings time=\"2024-11-12T20:51:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 20:51:17.263970 kubelet[3519]: E1112 20:51:17.263918 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:51:18.050806 containerd[1962]: time="2024-11-12T20:51:18.050757726Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:18.052539 containerd[1962]: time="2024-11-12T20:51:18.052406916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:51:18.055655 containerd[1962]: time="2024-11-12T20:51:18.054318186Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:18.057298 containerd[1962]: time="2024-11-12T20:51:18.057242897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:18.058365 containerd[1962]: time="2024-11-12T20:51:18.058304115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.361868339s" Nov 12 20:51:18.058507 containerd[1962]: time="2024-11-12T20:51:18.058482571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:51:18.064006 containerd[1962]: time="2024-11-12T20:51:18.063282693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:51:18.084993 containerd[1962]: time="2024-11-12T20:51:18.084949410Z" level=info msg="CreateContainer within sandbox \"320588942d1d87d03ff3bc6d5e9ccd9aa18e73a48c24ef18952b6e9ac356a48e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:51:18.118398 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2577191890.mount: Deactivated successfully. Nov 12 20:51:18.139517 containerd[1962]: time="2024-11-12T20:51:18.139467314Z" level=info msg="CreateContainer within sandbox \"320588942d1d87d03ff3bc6d5e9ccd9aa18e73a48c24ef18952b6e9ac356a48e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a72c454609302a686e610f4f3cc8aa07155d7aa4962d23d340bb44b1672f63bf\"" Nov 12 20:51:18.141792 containerd[1962]: time="2024-11-12T20:51:18.140349268Z" level=info msg="StartContainer for \"a72c454609302a686e610f4f3cc8aa07155d7aa4962d23d340bb44b1672f63bf\"" Nov 12 20:51:18.206030 systemd[1]: Started cri-containerd-a72c454609302a686e610f4f3cc8aa07155d7aa4962d23d340bb44b1672f63bf.scope - libcontainer container a72c454609302a686e610f4f3cc8aa07155d7aa4962d23d340bb44b1672f63bf. Nov 12 20:51:18.262760 containerd[1962]: time="2024-11-12T20:51:18.262714021Z" level=info msg="StartContainer for \"a72c454609302a686e610f4f3cc8aa07155d7aa4962d23d340bb44b1672f63bf\" returns successfully" Nov 12 20:51:19.264417 kubelet[3519]: E1112 20:51:19.264369 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:51:19.514088 kubelet[3519]: I1112 20:51:19.513877 3519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:51:21.274439 kubelet[3519]: E1112 20:51:21.274379 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:51:23.265487 kubelet[3519]: E1112 
20:51:23.263546 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:51:23.354216 containerd[1962]: time="2024-11-12T20:51:23.354174577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:23.356688 containerd[1962]: time="2024-11-12T20:51:23.356507529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:51:23.359778 containerd[1962]: time="2024-11-12T20:51:23.358171773Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:23.361215 containerd[1962]: time="2024-11-12T20:51:23.361179419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:23.362622 containerd[1962]: time="2024-11-12T20:51:23.362194111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 5.297927512s" Nov 12 20:51:23.362622 containerd[1962]: time="2024-11-12T20:51:23.362233126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:51:23.366643 
containerd[1962]: time="2024-11-12T20:51:23.366601230Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:51:23.427354 containerd[1962]: time="2024-11-12T20:51:23.426742266Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f\"" Nov 12 20:51:23.429974 containerd[1962]: time="2024-11-12T20:51:23.429574693Z" level=info msg="StartContainer for \"f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f\"" Nov 12 20:51:23.510724 systemd[1]: run-containerd-runc-k8s.io-f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f-runc.RCDXgM.mount: Deactivated successfully. Nov 12 20:51:23.520493 systemd[1]: Started cri-containerd-f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f.scope - libcontainer container f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f. 
Nov 12 20:51:23.638800 containerd[1962]: time="2024-11-12T20:51:23.638727841Z" level=info msg="StartContainer for \"f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f\" returns successfully" Nov 12 20:51:24.574492 kubelet[3519]: I1112 20:51:24.570603 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64fbd4c589-p4zqz" podStartSLOduration=7.726624779 podStartE2EDuration="11.570581826s" podCreationTimestamp="2024-11-12 20:51:13 +0000 UTC" firstStartedPulling="2024-11-12 20:51:14.216703306 +0000 UTC m=+25.236556827" lastFinishedPulling="2024-11-12 20:51:18.060660351 +0000 UTC m=+29.080513874" observedRunningTime="2024-11-12 20:51:18.498833026 +0000 UTC m=+29.518686559" watchObservedRunningTime="2024-11-12 20:51:24.570581826 +0000 UTC m=+35.590435398" Nov 12 20:51:24.884165 systemd[1]: cri-containerd-f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f.scope: Deactivated successfully. Nov 12 20:51:24.966731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f-rootfs.mount: Deactivated successfully. Nov 12 20:51:24.978170 kubelet[3519]: I1112 20:51:24.977678 3519 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 20:51:25.046995 kubelet[3519]: I1112 20:51:25.046946 3519 topology_manager.go:215] "Topology Admit Handler" podUID="b9630a59-9ff8-489c-b4ee-f423326fdc24" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zhn5h" Nov 12 20:51:25.068359 kubelet[3519]: I1112 20:51:25.065400 3519 topology_manager.go:215] "Topology Admit Handler" podUID="e5610ae2-edd8-4453-84a0-6212f12ff6f6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8kwmk" Nov 12 20:51:25.068054 systemd[1]: Created slice kubepods-burstable-podb9630a59_9ff8_489c_b4ee_f423326fdc24.slice - libcontainer container kubepods-burstable-podb9630a59_9ff8_489c_b4ee_f423326fdc24.slice. 
Nov 12 20:51:25.073255 containerd[1962]: time="2024-11-12T20:51:25.073009425Z" level=info msg="shim disconnected" id=f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f namespace=k8s.io Nov 12 20:51:25.073255 containerd[1962]: time="2024-11-12T20:51:25.073250155Z" level=warning msg="cleaning up after shim disconnected" id=f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f namespace=k8s.io Nov 12 20:51:25.073908 containerd[1962]: time="2024-11-12T20:51:25.073274561Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:51:25.079589 kubelet[3519]: I1112 20:51:25.077309 3519 topology_manager.go:215] "Topology Admit Handler" podUID="4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6" podNamespace="calico-apiserver" podName="calico-apiserver-7466d5c5cc-hqszx" Nov 12 20:51:25.096228 kubelet[3519]: I1112 20:51:25.092566 3519 topology_manager.go:215] "Topology Admit Handler" podUID="e2e7be2e-a36d-4ab1-8ede-5e860bf07447" podNamespace="calico-system" podName="calico-kube-controllers-576c7b7594-qkhxb" Nov 12 20:51:25.096228 kubelet[3519]: I1112 20:51:25.092792 3519 topology_manager.go:215] "Topology Admit Handler" podUID="45411fe7-dc80-47ef-ab22-476dbc16d243" podNamespace="calico-apiserver" podName="calico-apiserver-7466d5c5cc-sz8kp" Nov 12 20:51:25.111771 systemd[1]: Created slice kubepods-burstable-pode5610ae2_edd8_4453_84a0_6212f12ff6f6.slice - libcontainer container kubepods-burstable-pode5610ae2_edd8_4453_84a0_6212f12ff6f6.slice. Nov 12 20:51:25.130894 systemd[1]: Created slice kubepods-besteffort-pod4bebdd9c_33eb_4dd5_9f95_d6fe88644bc6.slice - libcontainer container kubepods-besteffort-pod4bebdd9c_33eb_4dd5_9f95_d6fe88644bc6.slice. 
Nov 12 20:51:25.133807 containerd[1962]: time="2024-11-12T20:51:25.132357592Z" level=warning msg="cleanup warnings time=\"2024-11-12T20:51:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 20:51:25.144795 systemd[1]: Created slice kubepods-besteffort-pode2e7be2e_a36d_4ab1_8ede_5e860bf07447.slice - libcontainer container kubepods-besteffort-pode2e7be2e_a36d_4ab1_8ede_5e860bf07447.slice. Nov 12 20:51:25.160004 systemd[1]: Created slice kubepods-besteffort-pod45411fe7_dc80_47ef_ab22_476dbc16d243.slice - libcontainer container kubepods-besteffort-pod45411fe7_dc80_47ef_ab22_476dbc16d243.slice. Nov 12 20:51:25.217852 kubelet[3519]: I1112 20:51:25.217801 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjr6g\" (UniqueName: \"kubernetes.io/projected/45411fe7-dc80-47ef-ab22-476dbc16d243-kube-api-access-gjr6g\") pod \"calico-apiserver-7466d5c5cc-sz8kp\" (UID: \"45411fe7-dc80-47ef-ab22-476dbc16d243\") " pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" Nov 12 20:51:25.218086 kubelet[3519]: I1112 20:51:25.218048 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/45411fe7-dc80-47ef-ab22-476dbc16d243-calico-apiserver-certs\") pod \"calico-apiserver-7466d5c5cc-sz8kp\" (UID: \"45411fe7-dc80-47ef-ab22-476dbc16d243\") " pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" Nov 12 20:51:25.218220 kubelet[3519]: I1112 20:51:25.218195 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e7be2e-a36d-4ab1-8ede-5e860bf07447-tigera-ca-bundle\") pod \"calico-kube-controllers-576c7b7594-qkhxb\" (UID: \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\") " 
pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" Nov 12 20:51:25.218301 kubelet[3519]: I1112 20:51:25.218230 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzg9s\" (UniqueName: \"kubernetes.io/projected/e2e7be2e-a36d-4ab1-8ede-5e860bf07447-kube-api-access-dzg9s\") pod \"calico-kube-controllers-576c7b7594-qkhxb\" (UID: \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\") " pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" Nov 12 20:51:25.218301 kubelet[3519]: I1112 20:51:25.218277 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6-calico-apiserver-certs\") pod \"calico-apiserver-7466d5c5cc-hqszx\" (UID: \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\") " pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" Nov 12 20:51:25.218477 kubelet[3519]: I1112 20:51:25.218311 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2fj4\" (UniqueName: \"kubernetes.io/projected/4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6-kube-api-access-j2fj4\") pod \"calico-apiserver-7466d5c5cc-hqszx\" (UID: \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\") " pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" Nov 12 20:51:25.218477 kubelet[3519]: I1112 20:51:25.218342 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdmj\" (UniqueName: \"kubernetes.io/projected/b9630a59-9ff8-489c-b4ee-f423326fdc24-kube-api-access-fzdmj\") pod \"coredns-7db6d8ff4d-zhn5h\" (UID: \"b9630a59-9ff8-489c-b4ee-f423326fdc24\") " pod="kube-system/coredns-7db6d8ff4d-zhn5h" Nov 12 20:51:25.218477 kubelet[3519]: I1112 20:51:25.218368 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b9630a59-9ff8-489c-b4ee-f423326fdc24-config-volume\") pod \"coredns-7db6d8ff4d-zhn5h\" (UID: \"b9630a59-9ff8-489c-b4ee-f423326fdc24\") " pod="kube-system/coredns-7db6d8ff4d-zhn5h" Nov 12 20:51:25.218477 kubelet[3519]: I1112 20:51:25.218392 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5610ae2-edd8-4453-84a0-6212f12ff6f6-config-volume\") pod \"coredns-7db6d8ff4d-8kwmk\" (UID: \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\") " pod="kube-system/coredns-7db6d8ff4d-8kwmk" Nov 12 20:51:25.218477 kubelet[3519]: I1112 20:51:25.218415 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl5mw\" (UniqueName: \"kubernetes.io/projected/e5610ae2-edd8-4453-84a0-6212f12ff6f6-kube-api-access-vl5mw\") pod \"coredns-7db6d8ff4d-8kwmk\" (UID: \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\") " pod="kube-system/coredns-7db6d8ff4d-8kwmk" Nov 12 20:51:25.272845 systemd[1]: Created slice kubepods-besteffort-pod579a082e_68d8_4be8_b0d9_8983607906fe.slice - libcontainer container kubepods-besteffort-pod579a082e_68d8_4be8_b0d9_8983607906fe.slice. 
Nov 12 20:51:25.276334 containerd[1962]: time="2024-11-12T20:51:25.276294125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xw4fv,Uid:579a082e-68d8-4be8-b0d9-8983607906fe,Namespace:calico-system,Attempt:0,}" Nov 12 20:51:25.432106 containerd[1962]: time="2024-11-12T20:51:25.428903115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8kwmk,Uid:e5610ae2-edd8-4453-84a0-6212f12ff6f6,Namespace:kube-system,Attempt:0,}" Nov 12 20:51:25.442919 containerd[1962]: time="2024-11-12T20:51:25.442630817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466d5c5cc-hqszx,Uid:4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:51:25.458596 containerd[1962]: time="2024-11-12T20:51:25.458538547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576c7b7594-qkhxb,Uid:e2e7be2e-a36d-4ab1-8ede-5e860bf07447,Namespace:calico-system,Attempt:0,}" Nov 12 20:51:25.468364 containerd[1962]: time="2024-11-12T20:51:25.467845957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466d5c5cc-sz8kp,Uid:45411fe7-dc80-47ef-ab22-476dbc16d243,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:51:25.591346 containerd[1962]: time="2024-11-12T20:51:25.591301174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:51:25.706844 containerd[1962]: time="2024-11-12T20:51:25.706431136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhn5h,Uid:b9630a59-9ff8-489c-b4ee-f423326fdc24,Namespace:kube-system,Attempt:0,}" Nov 12 20:51:25.792825 containerd[1962]: time="2024-11-12T20:51:25.792654456Z" level=error msg="Failed to destroy network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 12 20:51:25.807336 containerd[1962]: time="2024-11-12T20:51:25.807257932Z" level=error msg="Failed to destroy network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.815049 containerd[1962]: time="2024-11-12T20:51:25.812663514Z" level=error msg="encountered an error cleaning up failed sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.815049 containerd[1962]: time="2024-11-12T20:51:25.814859891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466d5c5cc-hqszx,Uid:4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.816207 containerd[1962]: time="2024-11-12T20:51:25.816163448Z" level=error msg="encountered an error cleaning up failed sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.816321 containerd[1962]: time="2024-11-12T20:51:25.816236489Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-xw4fv,Uid:579a082e-68d8-4be8-b0d9-8983607906fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.826319 containerd[1962]: time="2024-11-12T20:51:25.826268420Z" level=error msg="Failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.828835 kubelet[3519]: E1112 20:51:25.828765 3519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.829496 kubelet[3519]: E1112 20:51:25.828870 3519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" Nov 12 20:51:25.829496 kubelet[3519]: E1112 20:51:25.828902 3519 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" Nov 12 20:51:25.829496 kubelet[3519]: E1112 20:51:25.828765 3519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.829750 kubelet[3519]: E1112 20:51:25.828951 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7466d5c5cc-hqszx_calico-apiserver(4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7466d5c5cc-hqszx_calico-apiserver(4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" podUID="4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6" Nov 12 20:51:25.829750 kubelet[3519]: E1112 20:51:25.828972 3519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xw4fv" Nov 12 20:51:25.829750 kubelet[3519]: E1112 20:51:25.828994 3519 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xw4fv" Nov 12 20:51:25.830251 kubelet[3519]: E1112 20:51:25.830170 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xw4fv_calico-system(579a082e-68d8-4be8-b0d9-8983607906fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xw4fv_calico-system(579a082e-68d8-4be8-b0d9-8983607906fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:51:25.831380 containerd[1962]: time="2024-11-12T20:51:25.831333940Z" level=error msg="encountered an error cleaning up failed sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.832476 containerd[1962]: time="2024-11-12T20:51:25.831410005Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-8kwmk,Uid:e5610ae2-edd8-4453-84a0-6212f12ff6f6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.832549 kubelet[3519]: E1112 20:51:25.831616 3519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.832549 kubelet[3519]: E1112 20:51:25.831654 3519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8kwmk" Nov 12 20:51:25.832549 kubelet[3519]: E1112 20:51:25.831676 3519 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8kwmk" Nov 12 20:51:25.832703 kubelet[3519]: E1112 20:51:25.831729 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8kwmk_kube-system(e5610ae2-edd8-4453-84a0-6212f12ff6f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8kwmk_kube-system(e5610ae2-edd8-4453-84a0-6212f12ff6f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8kwmk" podUID="e5610ae2-edd8-4453-84a0-6212f12ff6f6" Nov 12 20:51:25.860479 containerd[1962]: time="2024-11-12T20:51:25.860191325Z" level=error msg="Failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.861083 containerd[1962]: time="2024-11-12T20:51:25.860943879Z" level=error msg="encountered an error cleaning up failed sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.861191 containerd[1962]: time="2024-11-12T20:51:25.861116490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576c7b7594-qkhxb,Uid:e2e7be2e-a36d-4ab1-8ede-5e860bf07447,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.861649 kubelet[3519]: E1112 20:51:25.861508 3519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.861920 kubelet[3519]: E1112 20:51:25.861646 3519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" Nov 12 20:51:25.861920 kubelet[3519]: E1112 20:51:25.861674 3519 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" Nov 12 20:51:25.861920 kubelet[3519]: E1112 20:51:25.861726 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-576c7b7594-qkhxb_calico-system(e2e7be2e-a36d-4ab1-8ede-5e860bf07447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-576c7b7594-qkhxb_calico-system(e2e7be2e-a36d-4ab1-8ede-5e860bf07447)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" podUID="e2e7be2e-a36d-4ab1-8ede-5e860bf07447" Nov 12 20:51:25.919506 containerd[1962]: time="2024-11-12T20:51:25.919397498Z" level=error msg="Failed to destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.920528 containerd[1962]: time="2024-11-12T20:51:25.919781196Z" level=error msg="encountered an error cleaning up failed sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.920528 containerd[1962]: time="2024-11-12T20:51:25.919845229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466d5c5cc-sz8kp,Uid:45411fe7-dc80-47ef-ab22-476dbc16d243,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.922046 kubelet[3519]: E1112 20:51:25.920192 3519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:25.922046 kubelet[3519]: E1112 20:51:25.920313 3519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" Nov 12 20:51:25.922046 kubelet[3519]: E1112 20:51:25.920346 3519 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" Nov 12 20:51:25.922377 kubelet[3519]: E1112 20:51:25.920424 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7466d5c5cc-sz8kp_calico-apiserver(45411fe7-dc80-47ef-ab22-476dbc16d243)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7466d5c5cc-sz8kp_calico-apiserver(45411fe7-dc80-47ef-ab22-476dbc16d243)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" podUID="45411fe7-dc80-47ef-ab22-476dbc16d243" Nov 12 20:51:26.003889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65-shm.mount: Deactivated successfully. Nov 12 20:51:26.030892 containerd[1962]: time="2024-11-12T20:51:26.029534323Z" level=error msg="Failed to destroy network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:26.033278 containerd[1962]: time="2024-11-12T20:51:26.031735526Z" level=error msg="encountered an error cleaning up failed sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:26.033278 containerd[1962]: time="2024-11-12T20:51:26.031830828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhn5h,Uid:b9630a59-9ff8-489c-b4ee-f423326fdc24,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:26.033636 kubelet[3519]: E1112 20:51:26.032092 3519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:26.033636 kubelet[3519]: E1112 20:51:26.032157 3519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zhn5h" Nov 12 20:51:26.033636 kubelet[3519]: E1112 20:51:26.032182 3519 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zhn5h" Nov 12 20:51:26.037852 kubelet[3519]: E1112 20:51:26.032236 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zhn5h_kube-system(b9630a59-9ff8-489c-b4ee-f423326fdc24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zhn5h_kube-system(b9630a59-9ff8-489c-b4ee-f423326fdc24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zhn5h" podUID="b9630a59-9ff8-489c-b4ee-f423326fdc24" Nov 12 20:51:26.040314 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10-shm.mount: Deactivated successfully. Nov 12 20:51:26.588320 kubelet[3519]: I1112 20:51:26.588215 3519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:51:26.606835 kubelet[3519]: I1112 20:51:26.604908 3519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Nov 12 20:51:26.607306 containerd[1962]: time="2024-11-12T20:51:26.607210238Z" level=info msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\"" Nov 12 20:51:26.622400 containerd[1962]: time="2024-11-12T20:51:26.620746377Z" level=info msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" Nov 12 20:51:26.622400 containerd[1962]: time="2024-11-12T20:51:26.621107083Z" level=info msg="Ensure that sandbox e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e in task-service has been cleanup successfully" Nov 12 20:51:26.622639 containerd[1962]: time="2024-11-12T20:51:26.622609649Z" level=info msg="Ensure that sandbox e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65 in task-service has been cleanup successfully" Nov 12 20:51:26.632291 kubelet[3519]: I1112 20:51:26.627338 3519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:51:26.637096 containerd[1962]: time="2024-11-12T20:51:26.634029417Z" level=info msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\"" Nov 12 20:51:26.637096 containerd[1962]: time="2024-11-12T20:51:26.634274045Z" level=info msg="Ensure that sandbox 81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10 in task-service 
has been cleanup successfully" Nov 12 20:51:26.651817 kubelet[3519]: I1112 20:51:26.651781 3519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:51:26.654458 containerd[1962]: time="2024-11-12T20:51:26.654416191Z" level=info msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" Nov 12 20:51:26.654654 containerd[1962]: time="2024-11-12T20:51:26.654627702Z" level=info msg="Ensure that sandbox be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898 in task-service has been cleanup successfully" Nov 12 20:51:26.656918 kubelet[3519]: I1112 20:51:26.656890 3519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:51:26.658745 containerd[1962]: time="2024-11-12T20:51:26.658696306Z" level=info msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\"" Nov 12 20:51:26.661299 containerd[1962]: time="2024-11-12T20:51:26.661006789Z" level=info msg="Ensure that sandbox 1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f in task-service has been cleanup successfully" Nov 12 20:51:26.677331 kubelet[3519]: I1112 20:51:26.677135 3519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:51:26.679374 containerd[1962]: time="2024-11-12T20:51:26.678838475Z" level=info msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" Nov 12 20:51:26.685632 containerd[1962]: time="2024-11-12T20:51:26.685521426Z" level=info msg="Ensure that sandbox 1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8 in task-service has been cleanup successfully" Nov 12 20:51:26.946453 containerd[1962]: time="2024-11-12T20:51:26.945919919Z" level=error 
msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" failed" error="failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:26.946571 kubelet[3519]: E1112 20:51:26.946204 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:51:26.946571 kubelet[3519]: E1112 20:51:26.946278 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e"} Nov 12 20:51:26.946571 kubelet[3519]: E1112 20:51:26.946363 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:26.946571 kubelet[3519]: E1112 20:51:26.946395 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" podUID="e2e7be2e-a36d-4ab1-8ede-5e860bf07447" Nov 12 20:51:26.956528 containerd[1962]: time="2024-11-12T20:51:26.956221933Z" level=error msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" failed" error="failed to destroy network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:26.956528 containerd[1962]: time="2024-11-12T20:51:26.956370865Z" level=error msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" failed" error="failed to destroy network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:26.956729 kubelet[3519]: E1112 20:51:26.956654 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:51:26.956729 kubelet[3519]: E1112 20:51:26.956709 3519 kuberuntime_manager.go:1375] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10"} Nov 12 20:51:26.956828 kubelet[3519]: E1112 20:51:26.956755 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9630a59-9ff8-489c-b4ee-f423326fdc24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:26.956828 kubelet[3519]: E1112 20:51:26.956791 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9630a59-9ff8-489c-b4ee-f423326fdc24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zhn5h" podUID="b9630a59-9ff8-489c-b4ee-f423326fdc24" Nov 12 20:51:26.956995 kubelet[3519]: E1112 20:51:26.956831 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:51:26.956995 kubelet[3519]: E1112 20:51:26.956854 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f"} Nov 12 20:51:26.956995 kubelet[3519]: E1112 20:51:26.956880 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:26.956995 kubelet[3519]: E1112 20:51:26.956904 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" podUID="4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6" Nov 12 20:51:26.977117 containerd[1962]: time="2024-11-12T20:51:26.977040879Z" level=error msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" failed" error="failed to destroy network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:26.978052 kubelet[3519]: E1112 20:51:26.977918 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Nov 12 20:51:26.978499 kubelet[3519]: E1112 20:51:26.978467 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"} Nov 12 20:51:26.978819 kubelet[3519]: E1112 20:51:26.978793 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"579a082e-68d8-4be8-b0d9-8983607906fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:26.979002 kubelet[3519]: E1112 20:51:26.978973 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"579a082e-68d8-4be8-b0d9-8983607906fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:51:26.994441 containerd[1962]: time="2024-11-12T20:51:26.994379695Z" level=error msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" failed" error="failed to 
destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:26.999369 containerd[1962]: time="2024-11-12T20:51:26.996240782Z" level=error msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" failed" error="failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:27.002642 kubelet[3519]: E1112 20:51:26.998364 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:51:27.002642 kubelet[3519]: E1112 20:51:26.998418 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8"} Nov 12 20:51:27.002642 kubelet[3519]: E1112 20:51:26.998448 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:27.002642 kubelet[3519]: E1112 20:51:26.998479 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8kwmk" podUID="e5610ae2-edd8-4453-84a0-6212f12ff6f6" Nov 12 20:51:27.004920 kubelet[3519]: E1112 20:51:26.998509 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:51:27.004920 kubelet[3519]: E1112 20:51:26.998526 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898"} Nov 12 20:51:27.004920 kubelet[3519]: E1112 20:51:26.998548 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:27.004920 kubelet[3519]: E1112 20:51:26.998570 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" podUID="45411fe7-dc80-47ef-ab22-476dbc16d243" Nov 12 20:51:33.042021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635705926.mount: Deactivated successfully. Nov 12 20:51:33.286684 containerd[1962]: time="2024-11-12T20:51:33.148840959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:51:33.335053 containerd[1962]: time="2024-11-12T20:51:33.334162514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 7.659766182s" Nov 12 20:51:33.335053 containerd[1962]: time="2024-11-12T20:51:33.334230470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:51:33.335053 containerd[1962]: time="2024-11-12T20:51:33.334374922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:33.435100 kubelet[3519]: I1112 
20:51:33.427640 3519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:51:33.443934 containerd[1962]: time="2024-11-12T20:51:33.443891951Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:33.445188 containerd[1962]: time="2024-11-12T20:51:33.444860602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:51:33.580622 containerd[1962]: time="2024-11-12T20:51:33.580581984Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:51:33.789815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218801062.mount: Deactivated successfully. Nov 12 20:51:33.812820 containerd[1962]: time="2024-11-12T20:51:33.812758396Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af\"" Nov 12 20:51:33.819199 containerd[1962]: time="2024-11-12T20:51:33.819153247Z" level=info msg="StartContainer for \"69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af\"" Nov 12 20:51:33.990378 systemd[1]: Started cri-containerd-69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af.scope - libcontainer container 69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af. 
Nov 12 20:51:34.075986 containerd[1962]: time="2024-11-12T20:51:34.075766115Z" level=info msg="StartContainer for \"69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af\" returns successfully" Nov 12 20:51:34.463835 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:51:34.465917 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 12 20:51:34.536901 systemd[1]: cri-containerd-69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af.scope: Deactivated successfully. Nov 12 20:51:34.620383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af-rootfs.mount: Deactivated successfully. Nov 12 20:51:34.822018 containerd[1962]: time="2024-11-12T20:51:34.821643966Z" level=info msg="shim disconnected" id=69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af namespace=k8s.io Nov 12 20:51:34.822018 containerd[1962]: time="2024-11-12T20:51:34.821740410Z" level=warning msg="cleaning up after shim disconnected" id=69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af namespace=k8s.io Nov 12 20:51:34.822018 containerd[1962]: time="2024-11-12T20:51:34.821753370Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:51:34.833561 kubelet[3519]: I1112 20:51:34.829028 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ksgbj" podStartSLOduration=2.5418157040000002 podStartE2EDuration="21.817154513s" podCreationTimestamp="2024-11-12 20:51:13 +0000 UTC" firstStartedPulling="2024-11-12 20:51:14.170022704 +0000 UTC m=+25.189876229" lastFinishedPulling="2024-11-12 20:51:33.445361526 +0000 UTC m=+44.465215038" observedRunningTime="2024-11-12 20:51:34.815488484 +0000 UTC m=+45.835342016" watchObservedRunningTime="2024-11-12 20:51:34.817154513 +0000 UTC m=+45.837008046" Nov 12 20:51:34.892984 containerd[1962]: 
time="2024-11-12T20:51:34.847955972Z" level=error msg="ExecSync for \"69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af not found: not found" Nov 12 20:51:34.898002 kubelet[3519]: E1112 20:51:34.897850 3519 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af not found: not found" containerID="69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Nov 12 20:51:34.900716 containerd[1962]: time="2024-11-12T20:51:34.900502738Z" level=error msg="ExecSync for \"69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af not found: not found" Nov 12 20:51:34.901033 kubelet[3519]: E1112 20:51:34.900903 3519 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af not found: not found" containerID="69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Nov 12 20:51:34.903410 containerd[1962]: time="2024-11-12T20:51:34.903355984Z" level=error msg="ExecSync for \"69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 
69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af not found: not found" Nov 12 20:51:34.904686 kubelet[3519]: E1112 20:51:34.903946 3519 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af not found: not found" containerID="69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Nov 12 20:51:35.756260 kubelet[3519]: I1112 20:51:35.756203 3519 scope.go:117] "RemoveContainer" containerID="69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af" Nov 12 20:51:35.774009 containerd[1962]: time="2024-11-12T20:51:35.773950947Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Nov 12 20:51:35.851477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3426326571.mount: Deactivated successfully. Nov 12 20:51:35.852881 containerd[1962]: time="2024-11-12T20:51:35.852395132Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29\"" Nov 12 20:51:35.857141 containerd[1962]: time="2024-11-12T20:51:35.854960016Z" level=info msg="StartContainer for \"ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29\"" Nov 12 20:51:35.909528 systemd[1]: Started cri-containerd-ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29.scope - libcontainer container ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29. 
Nov 12 20:51:35.953519 containerd[1962]: time="2024-11-12T20:51:35.953405831Z" level=info msg="StartContainer for \"ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29\" returns successfully" Nov 12 20:51:36.072211 systemd[1]: cri-containerd-ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29.scope: Deactivated successfully. Nov 12 20:51:36.111527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29-rootfs.mount: Deactivated successfully. Nov 12 20:51:36.119339 containerd[1962]: time="2024-11-12T20:51:36.119268115Z" level=info msg="shim disconnected" id=ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29 namespace=k8s.io Nov 12 20:51:36.119339 containerd[1962]: time="2024-11-12T20:51:36.119331106Z" level=warning msg="cleaning up after shim disconnected" id=ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29 namespace=k8s.io Nov 12 20:51:36.119339 containerd[1962]: time="2024-11-12T20:51:36.119344121Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:51:36.787720 kubelet[3519]: I1112 20:51:36.787687 3519 scope.go:117] "RemoveContainer" containerID="69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af" Nov 12 20:51:36.788299 kubelet[3519]: I1112 20:51:36.788181 3519 scope.go:117] "RemoveContainer" containerID="ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29" Nov 12 20:51:36.796479 kubelet[3519]: E1112 20:51:36.795621 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-ksgbj_calico-system(42215328-2581-4c3c-973d-7929b354c338)\"" pod="calico-system/calico-node-ksgbj" podUID="42215328-2581-4c3c-973d-7929b354c338" Nov 12 20:51:36.797473 containerd[1962]: time="2024-11-12T20:51:36.797204786Z" level=info msg="RemoveContainer for 
\"69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af\"" Nov 12 20:51:36.807928 containerd[1962]: time="2024-11-12T20:51:36.807436910Z" level=info msg="RemoveContainer for \"69b4822b711db8872c7307d93ff37bbbe4a1e674f31d53586fac63f5054b18af\" returns successfully" Nov 12 20:51:36.882899 systemd[1]: Started sshd@9-172.31.18.222:22-139.178.89.65:38740.service - OpenSSH per-connection server daemon (139.178.89.65:38740). Nov 12 20:51:37.105702 sshd[4687]: Accepted publickey for core from 139.178.89.65 port 38740 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:37.110225 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:37.117591 systemd-logind[1947]: New session 10 of user core. Nov 12 20:51:37.123488 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:51:37.461222 sshd[4687]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:37.465533 systemd[1]: sshd@9-172.31.18.222:22-139.178.89.65:38740.service: Deactivated successfully. Nov 12 20:51:37.468002 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:51:37.471358 systemd-logind[1947]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:51:37.472858 systemd-logind[1947]: Removed session 10. 
Nov 12 20:51:37.796475 kubelet[3519]: I1112 20:51:37.796302 3519 scope.go:117] "RemoveContainer" containerID="ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29" Nov 12 20:51:37.798764 kubelet[3519]: E1112 20:51:37.796989 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-ksgbj_calico-system(42215328-2581-4c3c-973d-7929b354c338)\"" pod="calico-system/calico-node-ksgbj" podUID="42215328-2581-4c3c-973d-7929b354c338" Nov 12 20:51:38.266959 containerd[1962]: time="2024-11-12T20:51:38.266907545Z" level=info msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\"" Nov 12 20:51:38.316890 containerd[1962]: time="2024-11-12T20:51:38.316548192Z" level=error msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" failed" error="failed to destroy network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:38.317962 kubelet[3519]: E1112 20:51:38.317826 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Nov 12 20:51:38.318879 kubelet[3519]: E1112 20:51:38.318760 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"} Nov 12 20:51:38.322205 kubelet[3519]: E1112 20:51:38.322165 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"579a082e-68d8-4be8-b0d9-8983607906fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:38.322397 kubelet[3519]: E1112 20:51:38.322228 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"579a082e-68d8-4be8-b0d9-8983607906fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:51:39.268697 containerd[1962]: time="2024-11-12T20:51:39.263410869Z" level=info msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\"" Nov 12 20:51:39.268697 containerd[1962]: time="2024-11-12T20:51:39.264633243Z" level=info msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" Nov 12 20:51:39.274481 containerd[1962]: time="2024-11-12T20:51:39.274395076Z" level=info msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" Nov 12 20:51:39.368090 containerd[1962]: time="2024-11-12T20:51:39.367290644Z" level=error msg="StopPodSandbox for 
\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" failed" error="failed to destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:39.368395 kubelet[3519]: E1112 20:51:39.367614 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:51:39.368395 kubelet[3519]: E1112 20:51:39.367674 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898"} Nov 12 20:51:39.368395 kubelet[3519]: E1112 20:51:39.367789 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:39.368395 kubelet[3519]: E1112 20:51:39.368087 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" podUID="45411fe7-dc80-47ef-ab22-476dbc16d243" Nov 12 20:51:39.385804 containerd[1962]: time="2024-11-12T20:51:39.385717957Z" level=error msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" failed" error="failed to destroy network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:39.386395 kubelet[3519]: E1112 20:51:39.386259 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:51:39.386506 kubelet[3519]: E1112 20:51:39.386414 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f"} Nov 12 20:51:39.386506 kubelet[3519]: E1112 20:51:39.386458 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:39.386506 kubelet[3519]: E1112 20:51:39.386490 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" podUID="4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6" Nov 12 20:51:39.387578 containerd[1962]: time="2024-11-12T20:51:39.387530336Z" level=error msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" failed" error="failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:39.387769 kubelet[3519]: E1112 20:51:39.387715 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:51:39.387833 kubelet[3519]: E1112 20:51:39.387768 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e"} Nov 12 20:51:39.387833 kubelet[3519]: E1112 20:51:39.387808 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:39.388007 kubelet[3519]: E1112 20:51:39.387838 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" podUID="e2e7be2e-a36d-4ab1-8ede-5e860bf07447" Nov 12 20:51:41.268690 containerd[1962]: time="2024-11-12T20:51:41.263914344Z" level=info msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\"" Nov 12 20:51:41.269839 containerd[1962]: time="2024-11-12T20:51:41.269611562Z" level=info msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" Nov 12 20:51:41.358825 containerd[1962]: time="2024-11-12T20:51:41.358601220Z" level=error msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" failed" error="failed to destroy network for sandbox 
\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:41.358983 kubelet[3519]: E1112 20:51:41.358891 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:51:41.358983 kubelet[3519]: E1112 20:51:41.358950 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10"} Nov 12 20:51:41.359565 kubelet[3519]: E1112 20:51:41.358992 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9630a59-9ff8-489c-b4ee-f423326fdc24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:41.359565 kubelet[3519]: E1112 20:51:41.359025 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9630a59-9ff8-489c-b4ee-f423326fdc24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zhn5h" podUID="b9630a59-9ff8-489c-b4ee-f423326fdc24" Nov 12 20:51:41.364044 containerd[1962]: time="2024-11-12T20:51:41.363987050Z" level=error msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" failed" error="failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:41.364290 kubelet[3519]: E1112 20:51:41.364238 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:51:41.364367 kubelet[3519]: E1112 20:51:41.364289 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8"} Nov 12 20:51:41.364367 kubelet[3519]: E1112 20:51:41.364331 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Nov 12 20:51:41.364505 kubelet[3519]: E1112 20:51:41.364366 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8kwmk" podUID="e5610ae2-edd8-4453-84a0-6212f12ff6f6" Nov 12 20:51:42.502867 systemd[1]: Started sshd@10-172.31.18.222:22-139.178.89.65:39258.service - OpenSSH per-connection server daemon (139.178.89.65:39258). Nov 12 20:51:42.675893 sshd[4816]: Accepted publickey for core from 139.178.89.65 port 39258 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:42.677587 sshd[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:42.683632 systemd-logind[1947]: New session 11 of user core. Nov 12 20:51:42.692023 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:51:42.910847 sshd[4816]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:42.914271 systemd[1]: sshd@10-172.31.18.222:22-139.178.89.65:39258.service: Deactivated successfully. Nov 12 20:51:42.917539 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:51:42.920316 systemd-logind[1947]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:51:42.922100 systemd-logind[1947]: Removed session 11. 
Nov 12 20:51:45.885848 kubelet[3519]: I1112 20:51:45.885810 3519 scope.go:117] "RemoveContainer" containerID="ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29" Nov 12 20:51:45.886680 kubelet[3519]: E1112 20:51:45.886605 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-ksgbj_calico-system(42215328-2581-4c3c-973d-7929b354c338)\"" pod="calico-system/calico-node-ksgbj" podUID="42215328-2581-4c3c-973d-7929b354c338" Nov 12 20:51:47.945470 systemd[1]: Started sshd@11-172.31.18.222:22-139.178.89.65:41356.service - OpenSSH per-connection server daemon (139.178.89.65:41356). Nov 12 20:51:48.117345 sshd[4830]: Accepted publickey for core from 139.178.89.65 port 41356 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:48.120544 sshd[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:48.134276 systemd-logind[1947]: New session 12 of user core. Nov 12 20:51:48.141679 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:51:48.416295 sshd[4830]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:48.421675 systemd-logind[1947]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:51:48.422451 systemd[1]: sshd@11-172.31.18.222:22-139.178.89.65:41356.service: Deactivated successfully. Nov 12 20:51:48.425686 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:51:48.426970 systemd-logind[1947]: Removed session 12. Nov 12 20:51:48.457119 systemd[1]: Started sshd@12-172.31.18.222:22-139.178.89.65:41364.service - OpenSSH per-connection server daemon (139.178.89.65:41364). 
Nov 12 20:51:48.621699 sshd[4844]: Accepted publickey for core from 139.178.89.65 port 41364 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:48.623908 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:48.630165 systemd-logind[1947]: New session 13 of user core. Nov 12 20:51:48.635374 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:51:49.024440 sshd[4844]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:49.033288 systemd[1]: sshd@12-172.31.18.222:22-139.178.89.65:41364.service: Deactivated successfully. Nov 12 20:51:49.044170 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:51:49.049440 systemd-logind[1947]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:51:49.068755 systemd[1]: Started sshd@13-172.31.18.222:22-139.178.89.65:41380.service - OpenSSH per-connection server daemon (139.178.89.65:41380). Nov 12 20:51:49.074155 systemd-logind[1947]: Removed session 13. Nov 12 20:51:49.277093 sshd[4855]: Accepted publickey for core from 139.178.89.65 port 41380 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:49.280722 sshd[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:49.291501 systemd-logind[1947]: New session 14 of user core. Nov 12 20:51:49.300311 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:51:49.648912 sshd[4855]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:49.657866 systemd[1]: sshd@13-172.31.18.222:22-139.178.89.65:41380.service: Deactivated successfully. Nov 12 20:51:49.675662 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:51:49.682926 systemd-logind[1947]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:51:49.686260 systemd-logind[1947]: Removed session 14. 
Nov 12 20:51:50.263107 containerd[1962]: time="2024-11-12T20:51:50.263033190Z" level=info msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" Nov 12 20:51:50.347294 containerd[1962]: time="2024-11-12T20:51:50.347235199Z" level=error msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" failed" error="failed to destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:50.347728 kubelet[3519]: E1112 20:51:50.347469 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:51:50.347728 kubelet[3519]: E1112 20:51:50.347524 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898"} Nov 12 20:51:50.347728 kubelet[3519]: E1112 20:51:50.347562 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
Nov 12 20:51:50.347728 kubelet[3519]: E1112 20:51:50.347589 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" podUID="45411fe7-dc80-47ef-ab22-476dbc16d243" Nov 12 20:51:51.266372 containerd[1962]: time="2024-11-12T20:51:51.266331085Z" level=info msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" Nov 12 20:51:51.313145 containerd[1962]: time="2024-11-12T20:51:51.313036715Z" level=error msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" failed" error="failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:51.313530 kubelet[3519]: E1112 20:51:51.313256 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:51:51.313530 kubelet[3519]: E1112 20:51:51.313306 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e"} Nov 12 20:51:51.313530 kubelet[3519]: E1112 20:51:51.313337 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:51.313530 kubelet[3519]: E1112 20:51:51.313358 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" podUID="e2e7be2e-a36d-4ab1-8ede-5e860bf07447" Nov 12 20:51:52.263549 containerd[1962]: time="2024-11-12T20:51:52.263500523Z" level=info msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\"" Nov 12 20:51:52.320230 containerd[1962]: time="2024-11-12T20:51:52.320142396Z" level=error msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" failed" error="failed to destroy network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 12 20:51:52.321330 kubelet[3519]: E1112 20:51:52.320672 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Nov 12 20:51:52.321330 kubelet[3519]: E1112 20:51:52.320766 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"} Nov 12 20:51:52.321330 kubelet[3519]: E1112 20:51:52.320984 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"579a082e-68d8-4be8-b0d9-8983607906fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:52.321330 kubelet[3519]: E1112 20:51:52.321022 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"579a082e-68d8-4be8-b0d9-8983607906fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 
20:51:54.264093 containerd[1962]: time="2024-11-12T20:51:54.264023176Z" level=info msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\"" Nov 12 20:51:54.328364 containerd[1962]: time="2024-11-12T20:51:54.328312491Z" level=error msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" failed" error="failed to destroy network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:54.328645 kubelet[3519]: E1112 20:51:54.328598 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:51:54.329021 kubelet[3519]: E1112 20:51:54.328660 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f"} Nov 12 20:51:54.329021 kubelet[3519]: E1112 20:51:54.328702 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 
20:51:54.329021 kubelet[3519]: E1112 20:51:54.328735 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" podUID="4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6" Nov 12 20:51:54.685493 systemd[1]: Started sshd@14-172.31.18.222:22-139.178.89.65:41382.service - OpenSSH per-connection server daemon (139.178.89.65:41382). Nov 12 20:51:54.844928 sshd[4946]: Accepted publickey for core from 139.178.89.65 port 41382 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:51:54.846628 sshd[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:54.854740 systemd-logind[1947]: New session 15 of user core. Nov 12 20:51:54.862295 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:51:55.118820 sshd[4946]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:55.123982 systemd[1]: sshd@14-172.31.18.222:22-139.178.89.65:41382.service: Deactivated successfully. Nov 12 20:51:55.126789 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:51:55.128971 systemd-logind[1947]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:51:55.134981 systemd-logind[1947]: Removed session 15. 
Nov 12 20:51:55.264413 containerd[1962]: time="2024-11-12T20:51:55.263643412Z" level=info msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" Nov 12 20:51:55.330702 containerd[1962]: time="2024-11-12T20:51:55.330510964Z" level=error msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" failed" error="failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:55.331014 kubelet[3519]: E1112 20:51:55.330973 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:51:55.331428 kubelet[3519]: E1112 20:51:55.331027 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8"} Nov 12 20:51:55.331428 kubelet[3519]: E1112 20:51:55.331092 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
Nov 12 20:51:55.331428 kubelet[3519]: E1112 20:51:55.331125 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8kwmk" podUID="e5610ae2-edd8-4453-84a0-6212f12ff6f6" Nov 12 20:51:56.262585 containerd[1962]: time="2024-11-12T20:51:56.262279106Z" level=info msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\"" Nov 12 20:51:56.307485 containerd[1962]: time="2024-11-12T20:51:56.307421819Z" level=error msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" failed" error="failed to destroy network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:51:56.308119 kubelet[3519]: E1112 20:51:56.307652 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:51:56.308119 kubelet[3519]: E1112 20:51:56.307709 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10"} Nov 12 20:51:56.308119 kubelet[3519]: E1112 20:51:56.307753 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9630a59-9ff8-489c-b4ee-f423326fdc24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:51:56.308119 kubelet[3519]: E1112 20:51:56.307783 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9630a59-9ff8-489c-b4ee-f423326fdc24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zhn5h" podUID="b9630a59-9ff8-489c-b4ee-f423326fdc24" Nov 12 20:51:59.263796 kubelet[3519]: I1112 20:51:59.262232 3519 scope.go:117] "RemoveContainer" containerID="ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29" Nov 12 20:51:59.277039 containerd[1962]: time="2024-11-12T20:51:59.276955121Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Nov 12 20:51:59.303755 containerd[1962]: time="2024-11-12T20:51:59.303574161Z" level=info msg="CreateContainer within sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" for &ContainerMetadata{Name:calico-node,Attempt:2,} 
returns container id \"4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd\"" Nov 12 20:51:59.308367 containerd[1962]: time="2024-11-12T20:51:59.304621536Z" level=info msg="StartContainer for \"4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd\"" Nov 12 20:51:59.392579 systemd[1]: Started cri-containerd-4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd.scope - libcontainer container 4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd. Nov 12 20:51:59.476118 containerd[1962]: time="2024-11-12T20:51:59.475954896Z" level=info msg="StartContainer for \"4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd\" returns successfully" Nov 12 20:51:59.590086 systemd[1]: cri-containerd-4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd.scope: Deactivated successfully. Nov 12 20:51:59.644288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd-rootfs.mount: Deactivated successfully. 
Nov 12 20:51:59.661023 containerd[1962]: time="2024-11-12T20:51:59.658662447Z" level=info msg="shim disconnected" id=4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd namespace=k8s.io Nov 12 20:51:59.661023 containerd[1962]: time="2024-11-12T20:51:59.658740452Z" level=warning msg="cleaning up after shim disconnected" id=4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd namespace=k8s.io Nov 12 20:51:59.661023 containerd[1962]: time="2024-11-12T20:51:59.658752850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:51:59.877309 kubelet[3519]: I1112 20:51:59.876989 3519 scope.go:117] "RemoveContainer" containerID="ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29" Nov 12 20:51:59.879484 kubelet[3519]: I1112 20:51:59.879453 3519 scope.go:117] "RemoveContainer" containerID="4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd" Nov 12 20:51:59.880537 containerd[1962]: time="2024-11-12T20:51:59.880424104Z" level=info msg="RemoveContainer for \"ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29\"" Nov 12 20:51:59.883385 kubelet[3519]: E1112 20:51:59.881021 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-ksgbj_calico-system(42215328-2581-4c3c-973d-7929b354c338)\"" pod="calico-system/calico-node-ksgbj" podUID="42215328-2581-4c3c-973d-7929b354c338" Nov 12 20:51:59.890503 containerd[1962]: time="2024-11-12T20:51:59.890456036Z" level=info msg="RemoveContainer for \"ffd7730f156a342e49c4c3548b8fe2ecf47d29198f88de7de1b7dab4ba1d0d29\" returns successfully" Nov 12 20:52:00.154422 systemd[1]: Started sshd@15-172.31.18.222:22-139.178.89.65:53208.service - OpenSSH per-connection server daemon (139.178.89.65:53208). 
Nov 12 20:52:00.368163 sshd[5061]: Accepted publickey for core from 139.178.89.65 port 53208 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:00.370042 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:00.375997 systemd-logind[1947]: New session 16 of user core. Nov 12 20:52:00.381276 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:52:00.703345 sshd[5061]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:00.711439 systemd[1]: sshd@15-172.31.18.222:22-139.178.89.65:53208.service: Deactivated successfully. Nov 12 20:52:00.714520 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:52:00.715694 systemd-logind[1947]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:52:00.718245 systemd-logind[1947]: Removed session 16. Nov 12 20:52:02.262503 containerd[1962]: time="2024-11-12T20:52:02.262449385Z" level=info msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" Nov 12 20:52:02.438427 containerd[1962]: time="2024-11-12T20:52:02.437937527Z" level=error msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" failed" error="failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:52:02.438658 kubelet[3519]: E1112 20:52:02.438260 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:02.444819 kubelet[3519]: E1112 20:52:02.438750 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e"} Nov 12 20:52:02.444819 kubelet[3519]: E1112 20:52:02.438801 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:52:02.444819 kubelet[3519]: E1112 20:52:02.438834 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" podUID="e2e7be2e-a36d-4ab1-8ede-5e860bf07447" Nov 12 20:52:04.263712 containerd[1962]: time="2024-11-12T20:52:04.263351683Z" level=info msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" Nov 12 20:52:04.342404 containerd[1962]: time="2024-11-12T20:52:04.342352350Z" level=error msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" failed" error="failed to destroy network for sandbox 
\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:52:04.342771 kubelet[3519]: E1112 20:52:04.342729 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:04.343381 kubelet[3519]: E1112 20:52:04.342784 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898"} Nov 12 20:52:04.343381 kubelet[3519]: E1112 20:52:04.342829 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:52:04.343381 kubelet[3519]: E1112 20:52:04.342859 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" podUID="45411fe7-dc80-47ef-ab22-476dbc16d243" Nov 12 20:52:05.744497 systemd[1]: Started sshd@16-172.31.18.222:22-139.178.89.65:53210.service - OpenSSH per-connection server daemon (139.178.89.65:53210). Nov 12 20:52:05.946968 sshd[5113]: Accepted publickey for core from 139.178.89.65 port 53210 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:05.949044 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:05.954133 systemd-logind[1947]: New session 17 of user core. Nov 12 20:52:05.962320 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:52:06.204284 sshd[5113]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:06.216365 systemd[1]: sshd@16-172.31.18.222:22-139.178.89.65:53210.service: Deactivated successfully. Nov 12 20:52:06.227594 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:52:06.235584 systemd-logind[1947]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:52:06.253404 systemd-logind[1947]: Removed session 17. 
Nov 12 20:52:06.263336 containerd[1962]: time="2024-11-12T20:52:06.263031589Z" level=info msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\"" Nov 12 20:52:06.300363 containerd[1962]: time="2024-11-12T20:52:06.300312311Z" level=error msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" failed" error="failed to destroy network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:52:06.300672 kubelet[3519]: E1112 20:52:06.300526 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Nov 12 20:52:06.301149 kubelet[3519]: E1112 20:52:06.300697 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"} Nov 12 20:52:06.301149 kubelet[3519]: E1112 20:52:06.300744 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"579a082e-68d8-4be8-b0d9-8983607906fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
Nov 12 20:52:06.301149 kubelet[3519]: E1112 20:52:06.300776 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"579a082e-68d8-4be8-b0d9-8983607906fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xw4fv" podUID="579a082e-68d8-4be8-b0d9-8983607906fe" Nov 12 20:52:08.262526 containerd[1962]: time="2024-11-12T20:52:08.262138805Z" level=info msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" Nov 12 20:52:08.300487 containerd[1962]: time="2024-11-12T20:52:08.300439009Z" level=error msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" failed" error="failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:52:08.300854 kubelet[3519]: E1112 20:52:08.300660 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:08.300854 kubelet[3519]: E1112 20:52:08.300705 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8"} Nov 12 20:52:08.300854 kubelet[3519]: E1112 20:52:08.300743 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:52:08.300854 kubelet[3519]: E1112 20:52:08.300765 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8kwmk" podUID="e5610ae2-edd8-4453-84a0-6212f12ff6f6" Nov 12 20:52:09.266698 containerd[1962]: time="2024-11-12T20:52:09.266651501Z" level=info msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\"" Nov 12 20:52:09.329236 containerd[1962]: time="2024-11-12T20:52:09.328898726Z" level=error msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" failed" error="failed to destroy network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 
20:52:09.329499 kubelet[3519]: E1112 20:52:09.329449 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:09.329950 kubelet[3519]: E1112 20:52:09.329515 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f"} Nov 12 20:52:09.329950 kubelet[3519]: E1112 20:52:09.329560 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:52:09.329950 kubelet[3519]: E1112 20:52:09.329593 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" podUID="4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6" Nov 12 
20:52:11.245705 systemd[1]: Started sshd@17-172.31.18.222:22-139.178.89.65:41896.service - OpenSSH per-connection server daemon (139.178.89.65:41896). Nov 12 20:52:11.437666 sshd[5184]: Accepted publickey for core from 139.178.89.65 port 41896 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:11.440783 sshd[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:11.468028 systemd-logind[1947]: New session 18 of user core. Nov 12 20:52:11.477951 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:52:11.735815 sshd[5184]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:11.740609 systemd-logind[1947]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:52:11.742257 systemd[1]: sshd@17-172.31.18.222:22-139.178.89.65:41896.service: Deactivated successfully. Nov 12 20:52:11.744503 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:52:11.745637 systemd-logind[1947]: Removed session 18. 
Nov 12 20:52:12.262966 containerd[1962]: time="2024-11-12T20:52:12.262612429Z" level=info msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\"" Nov 12 20:52:12.332250 containerd[1962]: time="2024-11-12T20:52:12.325214429Z" level=error msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" failed" error="failed to destroy network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:52:12.332832 kubelet[3519]: E1112 20:52:12.332774 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:12.333246 kubelet[3519]: E1112 20:52:12.332838 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10"} Nov 12 20:52:12.333246 kubelet[3519]: E1112 20:52:12.332881 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9630a59-9ff8-489c-b4ee-f423326fdc24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
Nov 12 20:52:12.333246 kubelet[3519]: E1112 20:52:12.332910 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9630a59-9ff8-489c-b4ee-f423326fdc24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zhn5h" podUID="b9630a59-9ff8-489c-b4ee-f423326fdc24" Nov 12 20:52:13.266549 containerd[1962]: time="2024-11-12T20:52:13.266056977Z" level=info msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" Nov 12 20:52:13.341549 containerd[1962]: time="2024-11-12T20:52:13.341408775Z" level=error msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" failed" error="failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:52:13.341930 kubelet[3519]: E1112 20:52:13.341885 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:13.346452 kubelet[3519]: E1112 20:52:13.346304 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e"} Nov 12 20:52:13.346452 kubelet[3519]: E1112 20:52:13.346379 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:52:13.346452 kubelet[3519]: E1112 20:52:13.346412 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2e7be2e-a36d-4ab1-8ede-5e860bf07447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" podUID="e2e7be2e-a36d-4ab1-8ede-5e860bf07447" Nov 12 20:52:14.441690 containerd[1962]: time="2024-11-12T20:52:14.441651355Z" level=info msg="StopPodSandbox for \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\"" Nov 12 20:52:14.442553 containerd[1962]: time="2024-11-12T20:52:14.441704908Z" level=info msg="Container to stop \"a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:52:14.442553 containerd[1962]: time="2024-11-12T20:52:14.441721159Z" level=info msg="Container to stop \"f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Nov 12 20:52:14.442553 containerd[1962]: time="2024-11-12T20:52:14.441733640Z" level=info msg="Container to stop \"4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:52:14.451992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169-shm.mount: Deactivated successfully. Nov 12 20:52:14.461286 systemd[1]: cri-containerd-65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169.scope: Deactivated successfully. Nov 12 20:52:14.505854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169-rootfs.mount: Deactivated successfully. Nov 12 20:52:14.526575 containerd[1962]: time="2024-11-12T20:52:14.526450982Z" level=info msg="shim disconnected" id=65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169 namespace=k8s.io Nov 12 20:52:14.528136 containerd[1962]: time="2024-11-12T20:52:14.527924447Z" level=warning msg="cleaning up after shim disconnected" id=65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169 namespace=k8s.io Nov 12 20:52:14.528136 containerd[1962]: time="2024-11-12T20:52:14.527955527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:52:14.575837 containerd[1962]: time="2024-11-12T20:52:14.575617529Z" level=info msg="TearDown network for sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" successfully" Nov 12 20:52:14.575837 containerd[1962]: time="2024-11-12T20:52:14.575658056Z" level=info msg="StopPodSandbox for \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" returns successfully" Nov 12 20:52:14.693185 kubelet[3519]: I1112 20:52:14.692904 3519 topology_manager.go:215] "Topology Admit Handler" podUID="f3cc475d-dbcf-4f95-b73e-9101c8fe515d" podNamespace="calico-system" podName="calico-node-kc5wm" Nov 
12 20:52:14.699204 kubelet[3519]: E1112 20:52:14.698792 3519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42215328-2581-4c3c-973d-7929b354c338" containerName="flexvol-driver" Nov 12 20:52:14.699204 kubelet[3519]: E1112 20:52:14.698879 3519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42215328-2581-4c3c-973d-7929b354c338" containerName="calico-node" Nov 12 20:52:14.699204 kubelet[3519]: E1112 20:52:14.698897 3519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42215328-2581-4c3c-973d-7929b354c338" containerName="install-cni" Nov 12 20:52:14.699204 kubelet[3519]: E1112 20:52:14.698905 3519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42215328-2581-4c3c-973d-7929b354c338" containerName="calico-node" Nov 12 20:52:14.705128 kubelet[3519]: I1112 20:52:14.704736 3519 memory_manager.go:354] "RemoveStaleState removing state" podUID="42215328-2581-4c3c-973d-7929b354c338" containerName="calico-node" Nov 12 20:52:14.705128 kubelet[3519]: I1112 20:52:14.704785 3519 memory_manager.go:354] "RemoveStaleState removing state" podUID="42215328-2581-4c3c-973d-7929b354c338" containerName="calico-node" Nov 12 20:52:14.705128 kubelet[3519]: E1112 20:52:14.704858 3519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42215328-2581-4c3c-973d-7929b354c338" containerName="calico-node" Nov 12 20:52:14.705128 kubelet[3519]: I1112 20:52:14.704898 3519 memory_manager.go:354] "RemoveStaleState removing state" podUID="42215328-2581-4c3c-973d-7929b354c338" containerName="calico-node" Nov 12 20:52:14.747845 systemd[1]: Created slice kubepods-besteffort-podf3cc475d_dbcf_4f95_b73e_9101c8fe515d.slice - libcontainer container kubepods-besteffort-podf3cc475d_dbcf_4f95_b73e_9101c8fe515d.slice. 
Nov 12 20:52:14.765908 kubelet[3519]: I1112 20:52:14.764028 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-lib-modules\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.765908 kubelet[3519]: I1112 20:52:14.764215 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-var-lib-calico\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.765908 kubelet[3519]: I1112 20:52:14.764249 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-flexvol-driver-host\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.765908 kubelet[3519]: I1112 20:52:14.764270 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-log-dir\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.765908 kubelet[3519]: I1112 20:52:14.764291 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-bin-dir\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.765908 kubelet[3519]: I1112 20:52:14.764327 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wscts\" (UniqueName: 
\"kubernetes.io/projected/42215328-2581-4c3c-973d-7929b354c338-kube-api-access-wscts\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.766405 kubelet[3519]: I1112 20:52:14.764347 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-net-dir\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.766405 kubelet[3519]: I1112 20:52:14.764375 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42215328-2581-4c3c-973d-7929b354c338-tigera-ca-bundle\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.767526 kubelet[3519]: I1112 20:52:14.766668 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/42215328-2581-4c3c-973d-7929b354c338-node-certs\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.767526 kubelet[3519]: I1112 20:52:14.767305 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-xtables-lock\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.767526 kubelet[3519]: I1112 20:52:14.767336 3519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-policysync\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.767526 kubelet[3519]: I1112 20:52:14.767364 3519 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-var-run-calico\") pod \"42215328-2581-4c3c-973d-7929b354c338\" (UID: \"42215328-2581-4c3c-973d-7929b354c338\") " Nov 12 20:52:14.780097 kubelet[3519]: I1112 20:52:14.773558 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:52:14.780097 kubelet[3519]: I1112 20:52:14.779363 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:52:14.780097 kubelet[3519]: I1112 20:52:14.779396 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:52:14.780097 kubelet[3519]: I1112 20:52:14.779421 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:52:14.781758 kubelet[3519]: I1112 20:52:14.781276 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:52:14.784832 kubelet[3519]: I1112 20:52:14.775801 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:52:14.794701 kubelet[3519]: I1112 20:52:14.794403 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42215328-2581-4c3c-973d-7929b354c338-node-certs" (OuterVolumeSpecName: "node-certs") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 20:52:14.798897 kubelet[3519]: I1112 20:52:14.795885 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:52:14.799584 systemd[1]: var-lib-kubelet-pods-42215328\x2d2581\x2d4c3c\x2d973d\x2d7929b354c338-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Nov 12 20:52:14.802938 kubelet[3519]: I1112 20:52:14.802905 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:52:14.803162 kubelet[3519]: I1112 20:52:14.803143 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-policysync" (OuterVolumeSpecName: "policysync") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:52:14.803875 kubelet[3519]: I1112 20:52:14.803261 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42215328-2581-4c3c-973d-7929b354c338-kube-api-access-wscts" (OuterVolumeSpecName: "kube-api-access-wscts") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "kube-api-access-wscts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:52:14.807824 systemd[1]: var-lib-kubelet-pods-42215328\x2d2581\x2d4c3c\x2d973d\x2d7929b354c338-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwscts.mount: Deactivated successfully. Nov 12 20:52:14.815652 kubelet[3519]: I1112 20:52:14.814281 3519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42215328-2581-4c3c-973d-7929b354c338-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "42215328-2581-4c3c-973d-7929b354c338" (UID: "42215328-2581-4c3c-973d-7929b354c338"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 20:52:14.817549 systemd[1]: var-lib-kubelet-pods-42215328\x2d2581\x2d4c3c\x2d973d\x2d7929b354c338-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Nov 12 20:52:14.868149 kubelet[3519]: I1112 20:52:14.867867 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-node-certs\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.868149 kubelet[3519]: I1112 20:52:14.867920 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-cni-log-dir\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.868149 kubelet[3519]: I1112 20:52:14.867948 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-policysync\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.868149 kubelet[3519]: I1112 20:52:14.867975 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-tigera-ca-bundle\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.869522 kubelet[3519]: I1112 20:52:14.869488 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2mvt\" (UniqueName: 
\"kubernetes.io/projected/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-kube-api-access-q2mvt\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.869616 kubelet[3519]: I1112 20:52:14.869567 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-var-lib-calico\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.869616 kubelet[3519]: I1112 20:52:14.869600 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-lib-modules\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.869709 kubelet[3519]: I1112 20:52:14.869630 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-var-run-calico\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.869709 kubelet[3519]: I1112 20:52:14.869681 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-cni-net-dir\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.869839 kubelet[3519]: I1112 20:52:14.869711 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-flexvol-driver-host\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.869839 kubelet[3519]: I1112 20:52:14.869736 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-cni-bin-dir\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.869839 kubelet[3519]: I1112 20:52:14.869764 3519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3cc475d-dbcf-4f95-b73e-9101c8fe515d-xtables-lock\") pod \"calico-node-kc5wm\" (UID: \"f3cc475d-dbcf-4f95-b73e-9101c8fe515d\") " pod="calico-system/calico-node-kc5wm" Nov 12 20:52:14.872865 kubelet[3519]: I1112 20:52:14.872824 3519 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-lib-modules\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.872865 kubelet[3519]: I1112 20:52:14.872865 3519 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-var-lib-calico\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873176 kubelet[3519]: I1112 20:52:14.872880 3519 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-log-dir\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873176 kubelet[3519]: I1112 20:52:14.872892 3519 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-bin-dir\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873176 kubelet[3519]: I1112 20:52:14.872903 3519 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-flexvol-driver-host\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873176 kubelet[3519]: I1112 20:52:14.872918 3519 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wscts\" (UniqueName: \"kubernetes.io/projected/42215328-2581-4c3c-973d-7929b354c338-kube-api-access-wscts\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873176 kubelet[3519]: I1112 20:52:14.872931 3519 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-cni-net-dir\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873176 kubelet[3519]: I1112 20:52:14.872943 3519 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42215328-2581-4c3c-973d-7929b354c338-tigera-ca-bundle\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873176 kubelet[3519]: I1112 20:52:14.872956 3519 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-xtables-lock\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873176 kubelet[3519]: I1112 20:52:14.872968 3519 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/42215328-2581-4c3c-973d-7929b354c338-node-certs\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873745 kubelet[3519]: I1112 20:52:14.872978 3519 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-policysync\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.873745 kubelet[3519]: I1112 20:52:14.873131 3519 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/42215328-2581-4c3c-973d-7929b354c338-var-run-calico\") on node \"ip-172-31-18-222\" DevicePath \"\"" Nov 12 20:52:14.919648 kubelet[3519]: I1112 20:52:14.919605 3519 scope.go:117] "RemoveContainer" containerID="4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd" Nov 12 20:52:14.922097 containerd[1962]: time="2024-11-12T20:52:14.921733727Z" level=info msg="RemoveContainer for \"4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd\"" Nov 12 20:52:14.928468 containerd[1962]: time="2024-11-12T20:52:14.928060991Z" level=info msg="RemoveContainer for \"4ce2e52aca1f17ab6dbfc5ab77185e17413fdfe76e0883bd0b661572cd9e47fd\" returns successfully" Nov 12 20:52:14.928601 kubelet[3519]: I1112 20:52:14.928364 3519 scope.go:117] "RemoveContainer" containerID="f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f" Nov 12 20:52:14.932825 containerd[1962]: time="2024-11-12T20:52:14.932349168Z" level=info msg="RemoveContainer for \"f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f\"" Nov 12 20:52:14.935488 systemd[1]: Removed slice kubepods-besteffort-pod42215328_2581_4c3c_973d_7929b354c338.slice - libcontainer container kubepods-besteffort-pod42215328_2581_4c3c_973d_7929b354c338.slice. 
Nov 12 20:52:14.943583 containerd[1962]: time="2024-11-12T20:52:14.943397850Z" level=info msg="RemoveContainer for \"f7f5349e60df008be81522856f41bee479eb8ec5e2a294ae630299ee3770ad2f\" returns successfully" Nov 12 20:52:14.945092 kubelet[3519]: I1112 20:52:14.943840 3519 scope.go:117] "RemoveContainer" containerID="a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c" Nov 12 20:52:14.956828 containerd[1962]: time="2024-11-12T20:52:14.956263123Z" level=info msg="RemoveContainer for \"a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c\"" Nov 12 20:52:14.968501 containerd[1962]: time="2024-11-12T20:52:14.968435615Z" level=info msg="RemoveContainer for \"a2a0d2017f845443f9348dfeb849aad23f467f4e954bae04be13c7414fef642c\" returns successfully" Nov 12 20:52:15.059980 containerd[1962]: time="2024-11-12T20:52:15.059922281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kc5wm,Uid:f3cc475d-dbcf-4f95-b73e-9101c8fe515d,Namespace:calico-system,Attempt:0,}" Nov 12 20:52:15.128769 containerd[1962]: time="2024-11-12T20:52:15.128317982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:52:15.128769 containerd[1962]: time="2024-11-12T20:52:15.128399883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:52:15.128769 containerd[1962]: time="2024-11-12T20:52:15.128416779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:15.129321 containerd[1962]: time="2024-11-12T20:52:15.128705939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:15.199773 systemd[1]: Started cri-containerd-b997dd548c726dab1ceb543eff4c8458223c04329df96f7ddbd8cf0b3c30e585.scope - libcontainer container b997dd548c726dab1ceb543eff4c8458223c04329df96f7ddbd8cf0b3c30e585. Nov 12 20:52:15.269814 kubelet[3519]: I1112 20:52:15.269779 3519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42215328-2581-4c3c-973d-7929b354c338" path="/var/lib/kubelet/pods/42215328-2581-4c3c-973d-7929b354c338/volumes" Nov 12 20:52:15.334155 containerd[1962]: time="2024-11-12T20:52:15.334024487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kc5wm,Uid:f3cc475d-dbcf-4f95-b73e-9101c8fe515d,Namespace:calico-system,Attempt:0,} returns sandbox id \"b997dd548c726dab1ceb543eff4c8458223c04329df96f7ddbd8cf0b3c30e585\"" Nov 12 20:52:15.339824 containerd[1962]: time="2024-11-12T20:52:15.339783742Z" level=info msg="CreateContainer within sandbox \"b997dd548c726dab1ceb543eff4c8458223c04329df96f7ddbd8cf0b3c30e585\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:52:15.363745 containerd[1962]: time="2024-11-12T20:52:15.363634388Z" level=info msg="CreateContainer within sandbox \"b997dd548c726dab1ceb543eff4c8458223c04329df96f7ddbd8cf0b3c30e585\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b8ed1e9707b263022f269989cc4a2a7af9700339381413122b5062ae56f86264\"" Nov 12 20:52:15.366674 containerd[1962]: time="2024-11-12T20:52:15.365691324Z" level=info msg="StartContainer for \"b8ed1e9707b263022f269989cc4a2a7af9700339381413122b5062ae56f86264\"" Nov 12 20:52:15.414627 systemd[1]: Started cri-containerd-b8ed1e9707b263022f269989cc4a2a7af9700339381413122b5062ae56f86264.scope - libcontainer container b8ed1e9707b263022f269989cc4a2a7af9700339381413122b5062ae56f86264. 
Nov 12 20:52:15.495122 containerd[1962]: time="2024-11-12T20:52:15.494231961Z" level=info msg="StartContainer for \"b8ed1e9707b263022f269989cc4a2a7af9700339381413122b5062ae56f86264\" returns successfully" Nov 12 20:52:15.619499 systemd[1]: cri-containerd-b8ed1e9707b263022f269989cc4a2a7af9700339381413122b5062ae56f86264.scope: Deactivated successfully. Nov 12 20:52:15.667833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8ed1e9707b263022f269989cc4a2a7af9700339381413122b5062ae56f86264-rootfs.mount: Deactivated successfully. Nov 12 20:52:15.694438 containerd[1962]: time="2024-11-12T20:52:15.694303508Z" level=info msg="shim disconnected" id=b8ed1e9707b263022f269989cc4a2a7af9700339381413122b5062ae56f86264 namespace=k8s.io Nov 12 20:52:15.694438 containerd[1962]: time="2024-11-12T20:52:15.694432395Z" level=warning msg="cleaning up after shim disconnected" id=b8ed1e9707b263022f269989cc4a2a7af9700339381413122b5062ae56f86264 namespace=k8s.io Nov 12 20:52:15.694438 containerd[1962]: time="2024-11-12T20:52:15.694449046Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:52:15.940156 containerd[1962]: time="2024-11-12T20:52:15.939216650Z" level=info msg="CreateContainer within sandbox \"b997dd548c726dab1ceb543eff4c8458223c04329df96f7ddbd8cf0b3c30e585\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:52:15.974755 containerd[1962]: time="2024-11-12T20:52:15.974665582Z" level=info msg="CreateContainer within sandbox \"b997dd548c726dab1ceb543eff4c8458223c04329df96f7ddbd8cf0b3c30e585\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"52a7db45524f1c196a86599f9d06c1b7b10f03c137c0a6d65445c775ed1baa80\"" Nov 12 20:52:15.978741 containerd[1962]: time="2024-11-12T20:52:15.976353028Z" level=info msg="StartContainer for \"52a7db45524f1c196a86599f9d06c1b7b10f03c137c0a6d65445c775ed1baa80\"" Nov 12 20:52:16.031322 systemd[1]: Started 
cri-containerd-52a7db45524f1c196a86599f9d06c1b7b10f03c137c0a6d65445c775ed1baa80.scope - libcontainer container 52a7db45524f1c196a86599f9d06c1b7b10f03c137c0a6d65445c775ed1baa80. Nov 12 20:52:16.076444 containerd[1962]: time="2024-11-12T20:52:16.076361905Z" level=info msg="StartContainer for \"52a7db45524f1c196a86599f9d06c1b7b10f03c137c0a6d65445c775ed1baa80\" returns successfully" Nov 12 20:52:16.793681 systemd[1]: Started sshd@18-172.31.18.222:22-139.178.89.65:41908.service - OpenSSH per-connection server daemon (139.178.89.65:41908). Nov 12 20:52:17.097295 sshd[5412]: Accepted publickey for core from 139.178.89.65 port 41908 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:17.100679 sshd[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:17.111218 systemd-logind[1947]: New session 19 of user core. Nov 12 20:52:17.117453 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 20:52:17.793226 sshd[5412]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:17.801545 systemd[1]: sshd@18-172.31.18.222:22-139.178.89.65:41908.service: Deactivated successfully. Nov 12 20:52:17.807545 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:52:17.810876 systemd-logind[1947]: Session 19 logged out. Waiting for processes to exit. Nov 12 20:52:17.815592 systemd-logind[1947]: Removed session 19. Nov 12 20:52:18.321206 systemd[1]: cri-containerd-52a7db45524f1c196a86599f9d06c1b7b10f03c137c0a6d65445c775ed1baa80.scope: Deactivated successfully. Nov 12 20:52:18.384672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a7db45524f1c196a86599f9d06c1b7b10f03c137c0a6d65445c775ed1baa80-rootfs.mount: Deactivated successfully. 
Nov 12 20:52:18.398366 containerd[1962]: time="2024-11-12T20:52:18.398299800Z" level=info msg="shim disconnected" id=52a7db45524f1c196a86599f9d06c1b7b10f03c137c0a6d65445c775ed1baa80 namespace=k8s.io Nov 12 20:52:18.398366 containerd[1962]: time="2024-11-12T20:52:18.398358339Z" level=warning msg="cleaning up after shim disconnected" id=52a7db45524f1c196a86599f9d06c1b7b10f03c137c0a6d65445c775ed1baa80 namespace=k8s.io Nov 12 20:52:18.398366 containerd[1962]: time="2024-11-12T20:52:18.398369782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:52:19.030857 containerd[1962]: time="2024-11-12T20:52:19.030219531Z" level=info msg="CreateContainer within sandbox \"b997dd548c726dab1ceb543eff4c8458223c04329df96f7ddbd8cf0b3c30e585\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:52:19.077246 containerd[1962]: time="2024-11-12T20:52:19.074250952Z" level=info msg="CreateContainer within sandbox \"b997dd548c726dab1ceb543eff4c8458223c04329df96f7ddbd8cf0b3c30e585\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c7cd2bc25f9cb0076f9a240d6607427013abe3b0750ac0824cda98240253ea6a\"" Nov 12 20:52:19.080050 containerd[1962]: time="2024-11-12T20:52:19.079995424Z" level=info msg="StartContainer for \"c7cd2bc25f9cb0076f9a240d6607427013abe3b0750ac0824cda98240253ea6a\"" Nov 12 20:52:19.090731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780015891.mount: Deactivated successfully. Nov 12 20:52:19.171027 systemd[1]: Started cri-containerd-c7cd2bc25f9cb0076f9a240d6607427013abe3b0750ac0824cda98240253ea6a.scope - libcontainer container c7cd2bc25f9cb0076f9a240d6607427013abe3b0750ac0824cda98240253ea6a. 
Nov 12 20:52:19.266197 containerd[1962]: time="2024-11-12T20:52:19.266153494Z" level=info msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" Nov 12 20:52:19.270391 containerd[1962]: time="2024-11-12T20:52:19.266605305Z" level=info msg="StartContainer for \"c7cd2bc25f9cb0076f9a240d6607427013abe3b0750ac0824cda98240253ea6a\" returns successfully" Nov 12 20:52:19.273976 containerd[1962]: time="2024-11-12T20:52:19.273934666Z" level=info msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" Nov 12 20:52:19.418751 containerd[1962]: time="2024-11-12T20:52:19.418689135Z" level=error msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" failed" error="failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:52:19.419632 kubelet[3519]: E1112 20:52:19.418964 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:19.419632 kubelet[3519]: E1112 20:52:19.419027 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8"} Nov 12 20:52:19.419632 kubelet[3519]: E1112 20:52:19.419106 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:52:19.419632 kubelet[3519]: E1112 20:52:19.419140 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5610ae2-edd8-4453-84a0-6212f12ff6f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8kwmk" podUID="e5610ae2-edd8-4453-84a0-6212f12ff6f6" Nov 12 20:52:19.456176 containerd[1962]: time="2024-11-12T20:52:19.455988609Z" level=error msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" failed" error="failed to destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:52:19.456448 kubelet[3519]: E1112 20:52:19.456403 3519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:19.457241 kubelet[3519]: E1112 20:52:19.456465 3519 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898"} Nov 12 20:52:19.457241 kubelet[3519]: E1112 20:52:19.456508 3519 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:52:19.457241 kubelet[3519]: E1112 20:52:19.456542 3519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45411fe7-dc80-47ef-ab22-476dbc16d243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" podUID="45411fe7-dc80-47ef-ab22-476dbc16d243" Nov 12 20:52:20.017376 kubelet[3519]: I1112 20:52:20.016648 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kc5wm" podStartSLOduration=6.016621672 podStartE2EDuration="6.016621672s" podCreationTimestamp="2024-11-12 20:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:52:20.01335447 +0000 UTC m=+91.033207986" 
watchObservedRunningTime="2024-11-12 20:52:20.016621672 +0000 UTC m=+91.036475213" Nov 12 20:52:20.265546 containerd[1962]: time="2024-11-12T20:52:20.264672485Z" level=info msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\"" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.359 [INFO][5576] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.360 [INFO][5576] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" iface="eth0" netns="/var/run/netns/cni-73e9a31c-8dae-a718-0d2b-34ad96eb8d79" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.361 [INFO][5576] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" iface="eth0" netns="/var/run/netns/cni-73e9a31c-8dae-a718-0d2b-34ad96eb8d79" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.361 [INFO][5576] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" iface="eth0" netns="/var/run/netns/cni-73e9a31c-8dae-a718-0d2b-34ad96eb8d79" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.361 [INFO][5576] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.362 [INFO][5576] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.390 [INFO][5589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" HandleID="k8s-pod-network.e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.390 [INFO][5589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.390 [INFO][5589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.397 [WARNING][5589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" HandleID="k8s-pod-network.e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.397 [INFO][5589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" HandleID="k8s-pod-network.e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.399 [INFO][5589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:20.402970 containerd[1962]: 2024-11-12 20:52:20.400 [INFO][5576] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Nov 12 20:52:20.404562 containerd[1962]: time="2024-11-12T20:52:20.403731293Z" level=info msg="TearDown network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" successfully" Nov 12 20:52:20.404562 containerd[1962]: time="2024-11-12T20:52:20.403767320Z" level=info msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" returns successfully" Nov 12 20:52:20.405511 containerd[1962]: time="2024-11-12T20:52:20.405474793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xw4fv,Uid:579a082e-68d8-4be8-b0d9-8983607906fe,Namespace:calico-system,Attempt:1,}" Nov 12 20:52:20.408099 systemd[1]: run-netns-cni\x2d73e9a31c\x2d8dae\x2da718\x2d0d2b\x2d34ad96eb8d79.mount: Deactivated successfully. Nov 12 20:52:20.688025 systemd-networkd[1812]: cali0fdb21aa7bc: Link UP Nov 12 20:52:20.692482 (udev-worker)[5617]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 20:52:20.693994 systemd-networkd[1812]: cali0fdb21aa7bc: Gained carrier Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.479 [INFO][5596] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.491 [INFO][5596] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0 csi-node-driver- calico-system 579a082e-68d8-4be8-b0d9-8983607906fe 1059 0 2024-11-12 20:51:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85bdc57578 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-222 csi-node-driver-xw4fv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0fdb21aa7bc [] []}} ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Namespace="calico-system" Pod="csi-node-driver-xw4fv" WorkloadEndpoint="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.492 [INFO][5596] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Namespace="calico-system" Pod="csi-node-driver-xw4fv" WorkloadEndpoint="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.528 [INFO][5607] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" HandleID="k8s-pod-network.a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 
20:52:20.556 [INFO][5607] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" HandleID="k8s-pod-network.a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-222", "pod":"csi-node-driver-xw4fv", "timestamp":"2024-11-12 20:52:20.528605018 +0000 UTC"}, Hostname:"ip-172-31-18-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.557 [INFO][5607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.557 [INFO][5607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.557 [INFO][5607] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-222' Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.560 [INFO][5607] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" host="ip-172-31-18-222" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.587 [INFO][5607] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-222" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.604 [INFO][5607] ipam/ipam.go 489: Trying affinity for 192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.610 [INFO][5607] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.617 [INFO][5607] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.618 [INFO][5607] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" host="ip-172-31-18-222" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.620 [INFO][5607] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88 Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.632 [INFO][5607] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" host="ip-172-31-18-222" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.647 [INFO][5607] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.65/26] block=192.168.88.64/26 
handle="k8s-pod-network.a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" host="ip-172-31-18-222" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.647 [INFO][5607] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.65/26] handle="k8s-pod-network.a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" host="ip-172-31-18-222" Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.647 [INFO][5607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:20.735059 containerd[1962]: 2024-11-12 20:52:20.647 [INFO][5607] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.65/26] IPv6=[] ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" HandleID="k8s-pod-network.a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.751102 containerd[1962]: 2024-11-12 20:52:20.652 [INFO][5596] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Namespace="calico-system" Pod="csi-node-driver-xw4fv" WorkloadEndpoint="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"579a082e-68d8-4be8-b0d9-8983607906fe", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"", Pod:"csi-node-driver-xw4fv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fdb21aa7bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:20.751102 containerd[1962]: 2024-11-12 20:52:20.652 [INFO][5596] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.65/32] ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Namespace="calico-system" Pod="csi-node-driver-xw4fv" WorkloadEndpoint="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.751102 containerd[1962]: 2024-11-12 20:52:20.652 [INFO][5596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fdb21aa7bc ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Namespace="calico-system" Pod="csi-node-driver-xw4fv" WorkloadEndpoint="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.751102 containerd[1962]: 2024-11-12 20:52:20.691 [INFO][5596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Namespace="calico-system" Pod="csi-node-driver-xw4fv" WorkloadEndpoint="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.751102 containerd[1962]: 2024-11-12 20:52:20.695 [INFO][5596] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Namespace="calico-system" Pod="csi-node-driver-xw4fv" WorkloadEndpoint="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"579a082e-68d8-4be8-b0d9-8983607906fe", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88", Pod:"csi-node-driver-xw4fv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fdb21aa7bc", MAC:"d2:70:c9:ff:52:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:20.751102 containerd[1962]: 2024-11-12 20:52:20.729 [INFO][5596] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88" Namespace="calico-system" 
Pod="csi-node-driver-xw4fv" WorkloadEndpoint="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0" Nov 12 20:52:20.820144 containerd[1962]: time="2024-11-12T20:52:20.820030507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:52:20.820328 containerd[1962]: time="2024-11-12T20:52:20.820136050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:52:20.820328 containerd[1962]: time="2024-11-12T20:52:20.820156834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:20.820827 containerd[1962]: time="2024-11-12T20:52:20.820310039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:20.856100 systemd[1]: run-containerd-runc-k8s.io-a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88-runc.q9fYLk.mount: Deactivated successfully. Nov 12 20:52:20.868349 systemd[1]: Started cri-containerd-a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88.scope - libcontainer container a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88. 
Nov 12 20:52:20.910770 containerd[1962]: time="2024-11-12T20:52:20.910727720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xw4fv,Uid:579a082e-68d8-4be8-b0d9-8983607906fe,Namespace:calico-system,Attempt:1,} returns sandbox id \"a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88\"" Nov 12 20:52:20.914395 containerd[1962]: time="2024-11-12T20:52:20.913544033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:52:21.265549 containerd[1962]: time="2024-11-12T20:52:21.265474153Z" level=info msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\"" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.351 [INFO][5705] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.351 [INFO][5705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" iface="eth0" netns="/var/run/netns/cni-c3bea909-aa81-7614-ca94-69e715e212eb" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.351 [INFO][5705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" iface="eth0" netns="/var/run/netns/cni-c3bea909-aa81-7614-ca94-69e715e212eb" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.352 [INFO][5705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" iface="eth0" netns="/var/run/netns/cni-c3bea909-aa81-7614-ca94-69e715e212eb" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.352 [INFO][5705] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.352 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.483 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" HandleID="k8s-pod-network.1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.483 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.483 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.506 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" HandleID="k8s-pod-network.1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.506 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" HandleID="k8s-pod-network.1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.514 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:21.533169 containerd[1962]: 2024-11-12 20:52:21.526 [INFO][5705] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:21.533169 containerd[1962]: time="2024-11-12T20:52:21.531969098Z" level=info msg="TearDown network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" successfully" Nov 12 20:52:21.533169 containerd[1962]: time="2024-11-12T20:52:21.532019070Z" level=info msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" returns successfully" Nov 12 20:52:21.538332 containerd[1962]: time="2024-11-12T20:52:21.537321135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466d5c5cc-hqszx,Uid:4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:52:21.539392 systemd[1]: run-netns-cni\x2dc3bea909\x2daa81\x2d7614\x2dca94\x2d69e715e212eb.mount: Deactivated successfully. Nov 12 20:52:21.938861 (udev-worker)[5616]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 20:52:21.944749 systemd-networkd[1812]: cali7ebfe6f3ffe: Link UP Nov 12 20:52:21.945223 systemd-networkd[1812]: cali7ebfe6f3ffe: Gained carrier Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.694 [INFO][5760] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.717 [INFO][5760] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0 calico-apiserver-7466d5c5cc- calico-apiserver 4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6 1068 0 2024-11-12 20:51:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7466d5c5cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-222 calico-apiserver-7466d5c5cc-hqszx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7ebfe6f3ffe [] []}} ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-hqszx" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.718 [INFO][5760] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-hqszx" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.858 [INFO][5776] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" 
HandleID="k8s-pod-network.f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.876 [INFO][5776] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" HandleID="k8s-pod-network.f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-222", "pod":"calico-apiserver-7466d5c5cc-hqszx", "timestamp":"2024-11-12 20:52:21.858710646 +0000 UTC"}, Hostname:"ip-172-31-18-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.876 [INFO][5776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.876 [INFO][5776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.876 [INFO][5776] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-222' Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.880 [INFO][5776] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" host="ip-172-31-18-222" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.888 [INFO][5776] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-222" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.895 [INFO][5776] ipam/ipam.go 489: Trying affinity for 192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.898 [INFO][5776] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.901 [INFO][5776] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.901 [INFO][5776] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" host="ip-172-31-18-222" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.903 [INFO][5776] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.911 [INFO][5776] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" host="ip-172-31-18-222" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.925 [INFO][5776] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.66/26] block=192.168.88.64/26 
handle="k8s-pod-network.f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" host="ip-172-31-18-222" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.925 [INFO][5776] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.66/26] handle="k8s-pod-network.f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" host="ip-172-31-18-222" Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.926 [INFO][5776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:21.988954 containerd[1962]: 2024-11-12 20:52:21.926 [INFO][5776] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.66/26] IPv6=[] ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" HandleID="k8s-pod-network.f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:21.990704 containerd[1962]: 2024-11-12 20:52:21.934 [INFO][5760] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-hqszx" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0", GenerateName:"calico-apiserver-7466d5c5cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466d5c5cc", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"", Pod:"calico-apiserver-7466d5c5cc-hqszx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ebfe6f3ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:21.990704 containerd[1962]: 2024-11-12 20:52:21.936 [INFO][5760] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.66/32] ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-hqszx" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:21.990704 containerd[1962]: 2024-11-12 20:52:21.936 [INFO][5760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ebfe6f3ffe ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-hqszx" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:21.990704 containerd[1962]: 2024-11-12 20:52:21.943 [INFO][5760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-hqszx" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:21.990704 containerd[1962]: 2024-11-12 
20:52:21.945 [INFO][5760] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-hqszx" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0", GenerateName:"calico-apiserver-7466d5c5cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466d5c5cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d", Pod:"calico-apiserver-7466d5c5cc-hqszx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ebfe6f3ffe", MAC:"72:16:78:e7:c5:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:21.990704 containerd[1962]: 2024-11-12 20:52:21.984 [INFO][5760] cni-plugin/k8s.go 
500: Wrote updated endpoint to datastore ContainerID="f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-hqszx" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:22.056932 containerd[1962]: time="2024-11-12T20:52:22.056735588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:52:22.059613 containerd[1962]: time="2024-11-12T20:52:22.057958890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:52:22.059613 containerd[1962]: time="2024-11-12T20:52:22.057988255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:22.059613 containerd[1962]: time="2024-11-12T20:52:22.058396037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:22.111639 systemd[1]: Started cri-containerd-f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d.scope - libcontainer container f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d. 
Nov 12 20:52:22.300086 systemd-networkd[1812]: cali0fdb21aa7bc: Gained IPv6LL Nov 12 20:52:22.419097 containerd[1962]: time="2024-11-12T20:52:22.418757255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466d5c5cc-hqszx,Uid:4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d\"" Nov 12 20:52:22.823723 containerd[1962]: time="2024-11-12T20:52:22.821273038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:22.829296 containerd[1962]: time="2024-11-12T20:52:22.829238026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:52:22.832307 containerd[1962]: time="2024-11-12T20:52:22.832250523Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:22.842158 systemd[1]: Started sshd@19-172.31.18.222:22-139.178.89.65:34548.service - OpenSSH per-connection server daemon (139.178.89.65:34548). 
Nov 12 20:52:22.851284 containerd[1962]: time="2024-11-12T20:52:22.851136080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:22.862579 containerd[1962]: time="2024-11-12T20:52:22.860897252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.947313632s" Nov 12 20:52:22.862579 containerd[1962]: time="2024-11-12T20:52:22.860949692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:52:22.878271 containerd[1962]: time="2024-11-12T20:52:22.877251549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:52:22.887187 containerd[1962]: time="2024-11-12T20:52:22.884664199Z" level=info msg="CreateContainer within sandbox \"a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:52:22.969099 containerd[1962]: time="2024-11-12T20:52:22.968948180Z" level=info msg="CreateContainer within sandbox \"a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"acf41c776eab07438f80552db859eebdb2ce58ebfba3ce3f8aedeb1bb3e592fd\"" Nov 12 20:52:22.973420 containerd[1962]: time="2024-11-12T20:52:22.973373203Z" level=info msg="StartContainer for \"acf41c776eab07438f80552db859eebdb2ce58ebfba3ce3f8aedeb1bb3e592fd\"" Nov 12 20:52:23.113222 sshd[5895]: Accepted publickey for core from 139.178.89.65 port 34548 ssh2: RSA 
SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:23.125105 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:23.138096 kernel: bpftool[5934]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:52:23.151046 systemd-logind[1947]: New session 20 of user core. Nov 12 20:52:23.162370 systemd[1]: Started cri-containerd-acf41c776eab07438f80552db859eebdb2ce58ebfba3ce3f8aedeb1bb3e592fd.scope - libcontainer container acf41c776eab07438f80552db859eebdb2ce58ebfba3ce3f8aedeb1bb3e592fd. Nov 12 20:52:23.164588 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:52:23.245055 containerd[1962]: time="2024-11-12T20:52:23.245010410Z" level=info msg="StartContainer for \"acf41c776eab07438f80552db859eebdb2ce58ebfba3ce3f8aedeb1bb3e592fd\" returns successfully" Nov 12 20:52:23.698271 systemd-networkd[1812]: vxlan.calico: Link UP Nov 12 20:52:23.698283 systemd-networkd[1812]: vxlan.calico: Gained carrier Nov 12 20:52:23.765078 systemd-networkd[1812]: cali7ebfe6f3ffe: Gained IPv6LL Nov 12 20:52:24.132348 sshd[5895]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:24.138612 systemd[1]: sshd@19-172.31.18.222:22-139.178.89.65:34548.service: Deactivated successfully. Nov 12 20:52:24.142145 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:52:24.146708 systemd-logind[1947]: Session 20 logged out. Waiting for processes to exit. Nov 12 20:52:24.149028 systemd-logind[1947]: Removed session 20. 
Nov 12 20:52:25.271102 containerd[1962]: time="2024-11-12T20:52:25.270979225Z" level=info msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" Nov 12 20:52:25.557507 systemd-networkd[1812]: vxlan.calico: Gained IPv6LL Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.527 [INFO][6048] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.529 [INFO][6048] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" iface="eth0" netns="/var/run/netns/cni-d00bd812-1498-13d9-67a5-3ebad6e6c508" Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.530 [INFO][6048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" iface="eth0" netns="/var/run/netns/cni-d00bd812-1498-13d9-67a5-3ebad6e6c508" Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.531 [INFO][6048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" iface="eth0" netns="/var/run/netns/cni-d00bd812-1498-13d9-67a5-3ebad6e6c508" Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.531 [INFO][6048] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.531 [INFO][6048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.630 [INFO][6054] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" HandleID="k8s-pod-network.e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.631 [INFO][6054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.632 [INFO][6054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.650 [WARNING][6054] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" HandleID="k8s-pod-network.e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.651 [INFO][6054] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" HandleID="k8s-pod-network.e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.654 [INFO][6054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:25.669465 containerd[1962]: 2024-11-12 20:52:25.661 [INFO][6048] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:25.674138 containerd[1962]: time="2024-11-12T20:52:25.670901773Z" level=info msg="TearDown network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" successfully" Nov 12 20:52:25.674138 containerd[1962]: time="2024-11-12T20:52:25.671145995Z" level=info msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" returns successfully" Nov 12 20:52:25.673193 systemd[1]: run-netns-cni\x2dd00bd812\x2d1498\x2d13d9\x2d67a5\x2d3ebad6e6c508.mount: Deactivated successfully. 
Nov 12 20:52:25.677355 containerd[1962]: time="2024-11-12T20:52:25.676220621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576c7b7594-qkhxb,Uid:e2e7be2e-a36d-4ab1-8ede-5e860bf07447,Namespace:calico-system,Attempt:1,}" Nov 12 20:52:26.030454 systemd-networkd[1812]: cali5183fea9279: Link UP Nov 12 20:52:26.030772 systemd-networkd[1812]: cali5183fea9279: Gained carrier Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.837 [INFO][6063] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0 calico-kube-controllers-576c7b7594- calico-system e2e7be2e-a36d-4ab1-8ede-5e860bf07447 1084 0 2024-11-12 20:51:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:576c7b7594 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-222 calico-kube-controllers-576c7b7594-qkhxb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5183fea9279 [] []}} ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Namespace="calico-system" Pod="calico-kube-controllers-576c7b7594-qkhxb" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.838 [INFO][6063] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Namespace="calico-system" Pod="calico-kube-controllers-576c7b7594-qkhxb" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.948 [INFO][6073] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" HandleID="k8s-pod-network.89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.964 [INFO][6073] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" HandleID="k8s-pod-network.89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003baf20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-222", "pod":"calico-kube-controllers-576c7b7594-qkhxb", "timestamp":"2024-11-12 20:52:25.948682639 +0000 UTC"}, Hostname:"ip-172-31-18-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.964 [INFO][6073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.964 [INFO][6073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.964 [INFO][6073] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-222' Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.967 [INFO][6073] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" host="ip-172-31-18-222" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.976 [INFO][6073] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-222" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.986 [INFO][6073] ipam/ipam.go 489: Trying affinity for 192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.990 [INFO][6073] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.993 [INFO][6073] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.994 [INFO][6073] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" host="ip-172-31-18-222" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:25.997 [INFO][6073] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66 Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:26.004 [INFO][6073] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" host="ip-172-31-18-222" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:26.018 [INFO][6073] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.67/26] block=192.168.88.64/26 
handle="k8s-pod-network.89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" host="ip-172-31-18-222" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:26.018 [INFO][6073] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.67/26] handle="k8s-pod-network.89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" host="ip-172-31-18-222" Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:26.018 [INFO][6073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:26.072534 containerd[1962]: 2024-11-12 20:52:26.018 [INFO][6073] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.67/26] IPv6=[] ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" HandleID="k8s-pod-network.89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:26.077409 containerd[1962]: 2024-11-12 20:52:26.022 [INFO][6063] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Namespace="calico-system" Pod="calico-kube-controllers-576c7b7594-qkhxb" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0", GenerateName:"calico-kube-controllers-576c7b7594-", Namespace:"calico-system", SelfLink:"", UID:"e2e7be2e-a36d-4ab1-8ede-5e860bf07447", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576c7b7594", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"", Pod:"calico-kube-controllers-576c7b7594-qkhxb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5183fea9279", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:26.077409 containerd[1962]: 2024-11-12 20:52:26.023 [INFO][6063] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.67/32] ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Namespace="calico-system" Pod="calico-kube-controllers-576c7b7594-qkhxb" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:26.077409 containerd[1962]: 2024-11-12 20:52:26.023 [INFO][6063] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5183fea9279 ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Namespace="calico-system" Pod="calico-kube-controllers-576c7b7594-qkhxb" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:26.077409 containerd[1962]: 2024-11-12 20:52:26.032 [INFO][6063] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Namespace="calico-system" Pod="calico-kube-controllers-576c7b7594-qkhxb" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" 
Nov 12 20:52:26.077409 containerd[1962]: 2024-11-12 20:52:26.033 [INFO][6063] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Namespace="calico-system" Pod="calico-kube-controllers-576c7b7594-qkhxb" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0", GenerateName:"calico-kube-controllers-576c7b7594-", Namespace:"calico-system", SelfLink:"", UID:"e2e7be2e-a36d-4ab1-8ede-5e860bf07447", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576c7b7594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66", Pod:"calico-kube-controllers-576c7b7594-qkhxb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5183fea9279", MAC:"46:ed:3f:ba:85:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 
20:52:26.077409 containerd[1962]: 2024-11-12 20:52:26.066 [INFO][6063] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66" Namespace="calico-system" Pod="calico-kube-controllers-576c7b7594-qkhxb" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:26.268045 containerd[1962]: time="2024-11-12T20:52:26.267296966Z" level=info msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\"" Nov 12 20:52:26.325521 containerd[1962]: time="2024-11-12T20:52:26.324366643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:52:26.325521 containerd[1962]: time="2024-11-12T20:52:26.324442401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:52:26.325521 containerd[1962]: time="2024-11-12T20:52:26.324461746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:26.325521 containerd[1962]: time="2024-11-12T20:52:26.324615880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:26.381627 containerd[1962]: time="2024-11-12T20:52:26.381055002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:26.412421 containerd[1962]: time="2024-11-12T20:52:26.412190712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:52:26.417465 containerd[1962]: time="2024-11-12T20:52:26.416661966Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:26.433668 containerd[1962]: time="2024-11-12T20:52:26.433619491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:26.435725 containerd[1962]: time="2024-11-12T20:52:26.435669251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 3.558366837s" Nov 12 20:52:26.436101 containerd[1962]: time="2024-11-12T20:52:26.436037136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:52:26.473383 systemd[1]: Started cri-containerd-89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66.scope - libcontainer container 89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66. 
Nov 12 20:52:26.501656 containerd[1962]: time="2024-11-12T20:52:26.501617360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:52:26.505961 containerd[1962]: time="2024-11-12T20:52:26.505898754Z" level=info msg="CreateContainer within sandbox \"f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:52:26.536465 containerd[1962]: time="2024-11-12T20:52:26.535808574Z" level=info msg="CreateContainer within sandbox \"f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c6a846f86dc23ceb61455c586b3d55bb33115114de1c8a6972f2cea8950241ce\"" Nov 12 20:52:26.538755 containerd[1962]: time="2024-11-12T20:52:26.538718023Z" level=info msg="StartContainer for \"c6a846f86dc23ceb61455c586b3d55bb33115114de1c8a6972f2cea8950241ce\"" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.410 [INFO][6115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.410 [INFO][6115] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" iface="eth0" netns="/var/run/netns/cni-c62c86c8-4b4b-66cb-8300-cacf0e90c462" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.414 [INFO][6115] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" iface="eth0" netns="/var/run/netns/cni-c62c86c8-4b4b-66cb-8300-cacf0e90c462" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.414 [INFO][6115] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" iface="eth0" netns="/var/run/netns/cni-c62c86c8-4b4b-66cb-8300-cacf0e90c462" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.414 [INFO][6115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.414 [INFO][6115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.534 [INFO][6141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" HandleID="k8s-pod-network.81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.535 [INFO][6141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.535 [INFO][6141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.557 [WARNING][6141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" HandleID="k8s-pod-network.81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.557 [INFO][6141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" HandleID="k8s-pod-network.81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.563 [INFO][6141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:26.591201 containerd[1962]: 2024-11-12 20:52:26.581 [INFO][6115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:26.593856 containerd[1962]: time="2024-11-12T20:52:26.591984422Z" level=info msg="TearDown network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" successfully" Nov 12 20:52:26.593856 containerd[1962]: time="2024-11-12T20:52:26.592019290Z" level=info msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" returns successfully" Nov 12 20:52:26.594482 containerd[1962]: time="2024-11-12T20:52:26.594339078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhn5h,Uid:b9630a59-9ff8-489c-b4ee-f423326fdc24,Namespace:kube-system,Attempt:1,}" Nov 12 20:52:26.632786 containerd[1962]: time="2024-11-12T20:52:26.632603254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576c7b7594-qkhxb,Uid:e2e7be2e-a36d-4ab1-8ede-5e860bf07447,Namespace:calico-system,Attempt:1,} returns sandbox id \"89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66\"" Nov 12 20:52:26.663381 systemd[1]: 
Started cri-containerd-c6a846f86dc23ceb61455c586b3d55bb33115114de1c8a6972f2cea8950241ce.scope - libcontainer container c6a846f86dc23ceb61455c586b3d55bb33115114de1c8a6972f2cea8950241ce. Nov 12 20:52:26.677736 systemd[1]: run-netns-cni\x2dc62c86c8\x2d4b4b\x2d66cb\x2d8300\x2dcacf0e90c462.mount: Deactivated successfully. Nov 12 20:52:26.958173 containerd[1962]: time="2024-11-12T20:52:26.956502581Z" level=info msg="StartContainer for \"c6a846f86dc23ceb61455c586b3d55bb33115114de1c8a6972f2cea8950241ce\" returns successfully" Nov 12 20:52:27.040861 systemd-networkd[1812]: calid9649997b03: Link UP Nov 12 20:52:27.042387 systemd-networkd[1812]: calid9649997b03: Gained carrier Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.833 [INFO][6180] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0 coredns-7db6d8ff4d- kube-system b9630a59-9ff8-489c-b4ee-f423326fdc24 1097 0 2024-11-12 20:51:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-222 coredns-7db6d8ff4d-zhn5h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid9649997b03 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhn5h" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.833 [INFO][6180] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhn5h" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 
20:52:26.943 [INFO][6199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" HandleID="k8s-pod-network.9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.968 [INFO][6199] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" HandleID="k8s-pod-network.9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034f8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-222", "pod":"coredns-7db6d8ff4d-zhn5h", "timestamp":"2024-11-12 20:52:26.943567781 +0000 UTC"}, Hostname:"ip-172-31-18-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.968 [INFO][6199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.968 [INFO][6199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.969 [INFO][6199] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-222' Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.972 [INFO][6199] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" host="ip-172-31-18-222" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.981 [INFO][6199] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-222" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.992 [INFO][6199] ipam/ipam.go 489: Trying affinity for 192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:26.998 [INFO][6199] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:27.009 [INFO][6199] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:27.009 [INFO][6199] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" host="ip-172-31-18-222" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:27.011 [INFO][6199] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928 Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:27.020 [INFO][6199] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" host="ip-172-31-18-222" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:27.030 [INFO][6199] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.68/26] block=192.168.88.64/26 
handle="k8s-pod-network.9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" host="ip-172-31-18-222" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:27.031 [INFO][6199] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.68/26] handle="k8s-pod-network.9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" host="ip-172-31-18-222" Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:27.031 [INFO][6199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:27.080773 containerd[1962]: 2024-11-12 20:52:27.031 [INFO][6199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.68/26] IPv6=[] ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" HandleID="k8s-pod-network.9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:27.084599 containerd[1962]: 2024-11-12 20:52:27.035 [INFO][6180] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhn5h" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b9630a59-9ff8-489c-b4ee-f423326fdc24", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"", Pod:"coredns-7db6d8ff4d-zhn5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9649997b03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:27.084599 containerd[1962]: 2024-11-12 20:52:27.035 [INFO][6180] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.68/32] ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhn5h" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:27.084599 containerd[1962]: 2024-11-12 20:52:27.035 [INFO][6180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9649997b03 ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhn5h" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:27.084599 containerd[1962]: 2024-11-12 20:52:27.042 [INFO][6180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhn5h" 
WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:27.084599 containerd[1962]: 2024-11-12 20:52:27.045 [INFO][6180] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhn5h" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b9630a59-9ff8-489c-b4ee-f423326fdc24", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928", Pod:"coredns-7db6d8ff4d-zhn5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9649997b03", MAC:"96:1e:7b:ec:51:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:27.084599 containerd[1962]: 2024-11-12 20:52:27.078 [INFO][6180] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhn5h" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:27.186586 containerd[1962]: time="2024-11-12T20:52:27.185426122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:52:27.186586 containerd[1962]: time="2024-11-12T20:52:27.185516931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:52:27.186586 containerd[1962]: time="2024-11-12T20:52:27.185534644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:27.188214 containerd[1962]: time="2024-11-12T20:52:27.187626702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:27.222045 systemd-networkd[1812]: cali5183fea9279: Gained IPv6LL Nov 12 20:52:27.309844 systemd[1]: Started cri-containerd-9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928.scope - libcontainer container 9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928. Nov 12 20:52:27.327534 systemd[1]: run-containerd-runc-k8s.io-9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928-runc.lbmP2w.mount: Deactivated successfully. 
Nov 12 20:52:27.590492 containerd[1962]: time="2024-11-12T20:52:27.590350873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhn5h,Uid:b9630a59-9ff8-489c-b4ee-f423326fdc24,Namespace:kube-system,Attempt:1,} returns sandbox id \"9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928\"" Nov 12 20:52:27.617215 containerd[1962]: time="2024-11-12T20:52:27.616678963Z" level=info msg="CreateContainer within sandbox \"9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:52:27.690022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571518900.mount: Deactivated successfully. Nov 12 20:52:27.710675 containerd[1962]: time="2024-11-12T20:52:27.710624951Z" level=info msg="CreateContainer within sandbox \"9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"149e9f712b17a447b82fb555d634c01a5894861251a551c82ee3e016e4daf121\"" Nov 12 20:52:27.711431 containerd[1962]: time="2024-11-12T20:52:27.711343586Z" level=info msg="StartContainer for \"149e9f712b17a447b82fb555d634c01a5894861251a551c82ee3e016e4daf121\"" Nov 12 20:52:27.766184 systemd[1]: Started cri-containerd-149e9f712b17a447b82fb555d634c01a5894861251a551c82ee3e016e4daf121.scope - libcontainer container 149e9f712b17a447b82fb555d634c01a5894861251a551c82ee3e016e4daf121. 
Nov 12 20:52:27.822437 containerd[1962]: time="2024-11-12T20:52:27.822396300Z" level=info msg="StartContainer for \"149e9f712b17a447b82fb555d634c01a5894861251a551c82ee3e016e4daf121\" returns successfully" Nov 12 20:52:28.187480 kubelet[3519]: I1112 20:52:28.158302 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7466d5c5cc-hqszx" podStartSLOduration=71.092337554 podStartE2EDuration="1m15.158272339s" podCreationTimestamp="2024-11-12 20:51:13 +0000 UTC" firstStartedPulling="2024-11-12 20:52:22.425471395 +0000 UTC m=+93.445324910" lastFinishedPulling="2024-11-12 20:52:26.491406171 +0000 UTC m=+97.511259695" observedRunningTime="2024-11-12 20:52:27.185270046 +0000 UTC m=+98.205123580" watchObservedRunningTime="2024-11-12 20:52:28.158272339 +0000 UTC m=+99.178125870" Nov 12 20:52:28.565253 systemd-networkd[1812]: calid9649997b03: Gained IPv6LL Nov 12 20:52:28.589972 containerd[1962]: time="2024-11-12T20:52:28.588749763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:28.590815 containerd[1962]: time="2024-11-12T20:52:28.590627254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:52:28.593725 containerd[1962]: time="2024-11-12T20:52:28.593653238Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:28.606491 containerd[1962]: time="2024-11-12T20:52:28.606284533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:28.609316 containerd[1962]: time="2024-11-12T20:52:28.609186886Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 2.107518991s" Nov 12 20:52:28.609316 containerd[1962]: time="2024-11-12T20:52:28.609239297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:52:28.611412 containerd[1962]: time="2024-11-12T20:52:28.610802032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:52:28.626250 containerd[1962]: time="2024-11-12T20:52:28.626097936Z" level=info msg="CreateContainer within sandbox \"a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:52:28.772125 containerd[1962]: time="2024-11-12T20:52:28.771380360Z" level=info msg="CreateContainer within sandbox \"a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"90c8664f2a9f1e9ccab6806b007aec5977e224bd69a0dd1b1ec4071ab9ade8ce\"" Nov 12 20:52:28.772302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779895828.mount: Deactivated successfully. Nov 12 20:52:28.777824 containerd[1962]: time="2024-11-12T20:52:28.777784496Z" level=info msg="StartContainer for \"90c8664f2a9f1e9ccab6806b007aec5977e224bd69a0dd1b1ec4071ab9ade8ce\"" Nov 12 20:52:28.877854 systemd[1]: Started cri-containerd-90c8664f2a9f1e9ccab6806b007aec5977e224bd69a0dd1b1ec4071ab9ade8ce.scope - libcontainer container 90c8664f2a9f1e9ccab6806b007aec5977e224bd69a0dd1b1ec4071ab9ade8ce. 
Nov 12 20:52:29.017282 containerd[1962]: time="2024-11-12T20:52:29.016980625Z" level=info msg="StartContainer for \"90c8664f2a9f1e9ccab6806b007aec5977e224bd69a0dd1b1ec4071ab9ade8ce\" returns successfully" Nov 12 20:52:29.159272 kubelet[3519]: I1112 20:52:29.158926 3519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:52:29.191691 systemd[1]: Started sshd@20-172.31.18.222:22-139.178.89.65:34728.service - OpenSSH per-connection server daemon (139.178.89.65:34728). Nov 12 20:52:29.255202 kubelet[3519]: I1112 20:52:29.255141 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zhn5h" podStartSLOduration=87.255119759 podStartE2EDuration="1m27.255119759s" podCreationTimestamp="2024-11-12 20:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:52:28.186345445 +0000 UTC m=+99.206198958" watchObservedRunningTime="2024-11-12 20:52:29.255119759 +0000 UTC m=+100.274973293" Nov 12 20:52:29.258426 kubelet[3519]: I1112 20:52:29.258368 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xw4fv" podStartSLOduration=68.560463086 podStartE2EDuration="1m16.258320348s" podCreationTimestamp="2024-11-12 20:51:13 +0000 UTC" firstStartedPulling="2024-11-12 20:52:20.912489276 +0000 UTC m=+91.932342799" lastFinishedPulling="2024-11-12 20:52:28.610346486 +0000 UTC m=+99.630200061" observedRunningTime="2024-11-12 20:52:29.254729748 +0000 UTC m=+100.274583281" watchObservedRunningTime="2024-11-12 20:52:29.258320348 +0000 UTC m=+100.278173883" Nov 12 20:52:29.542270 sshd[6361]: Accepted publickey for core from 139.178.89.65 port 34728 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:29.543421 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:29.569434 systemd-logind[1947]: 
New session 21 of user core. Nov 12 20:52:29.574560 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:52:30.357511 kubelet[3519]: I1112 20:52:30.356375 3519 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:52:30.387106 kubelet[3519]: I1112 20:52:30.386896 3519 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:52:30.951925 sshd[6361]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:30.960912 systemd[1]: sshd@20-172.31.18.222:22-139.178.89.65:34728.service: Deactivated successfully. Nov 12 20:52:30.963532 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:52:30.964803 systemd-logind[1947]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:52:30.969367 systemd-logind[1947]: Removed session 21. Nov 12 20:52:31.267939 containerd[1962]: time="2024-11-12T20:52:31.267149730Z" level=info msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.381 [INFO][6400] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.381 [INFO][6400] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" iface="eth0" netns="/var/run/netns/cni-b2c00426-41be-9154-9934-1b75438f87aa" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.383 [INFO][6400] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" iface="eth0" netns="/var/run/netns/cni-b2c00426-41be-9154-9934-1b75438f87aa" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.384 [INFO][6400] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" iface="eth0" netns="/var/run/netns/cni-b2c00426-41be-9154-9934-1b75438f87aa" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.384 [INFO][6400] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.385 [INFO][6400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.416 [INFO][6406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" HandleID="k8s-pod-network.be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.416 [INFO][6406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.416 [INFO][6406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.423 [WARNING][6406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" HandleID="k8s-pod-network.be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.424 [INFO][6406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" HandleID="k8s-pod-network.be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.425 [INFO][6406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:31.431819 containerd[1962]: 2024-11-12 20:52:31.428 [INFO][6400] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:31.435115 containerd[1962]: time="2024-11-12T20:52:31.434839996Z" level=info msg="TearDown network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" successfully" Nov 12 20:52:31.435115 containerd[1962]: time="2024-11-12T20:52:31.434881062Z" level=info msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" returns successfully" Nov 12 20:52:31.439711 systemd[1]: run-netns-cni\x2db2c00426\x2d41be\x2d9154\x2d9934\x2d1b75438f87aa.mount: Deactivated successfully. 
Nov 12 20:52:31.445108 containerd[1962]: time="2024-11-12T20:52:31.444889103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466d5c5cc-sz8kp,Uid:45411fe7-dc80-47ef-ab22-476dbc16d243,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:52:31.479180 ntpd[1940]: Listen normally on 7 vxlan.calico 192.168.88.64:123 Nov 12 20:52:31.482453 ntpd[1940]: 12 Nov 20:52:31 ntpd[1940]: Listen normally on 7 vxlan.calico 192.168.88.64:123 Nov 12 20:52:31.482453 ntpd[1940]: 12 Nov 20:52:31 ntpd[1940]: Listen normally on 8 cali0fdb21aa7bc [fe80::ecee:eeff:feee:eeee%4]:123 Nov 12 20:52:31.482453 ntpd[1940]: 12 Nov 20:52:31 ntpd[1940]: Listen normally on 9 cali7ebfe6f3ffe [fe80::ecee:eeff:feee:eeee%5]:123 Nov 12 20:52:31.482453 ntpd[1940]: 12 Nov 20:52:31 ntpd[1940]: Listen normally on 10 vxlan.calico [fe80::64f2:1ff:fe7f:ab21%6]:123 Nov 12 20:52:31.482453 ntpd[1940]: 12 Nov 20:52:31 ntpd[1940]: Listen normally on 11 cali5183fea9279 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 20:52:31.482453 ntpd[1940]: 12 Nov 20:52:31 ntpd[1940]: Listen normally on 12 calid9649997b03 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 20:52:31.479271 ntpd[1940]: Listen normally on 8 cali0fdb21aa7bc [fe80::ecee:eeff:feee:eeee%4]:123 Nov 12 20:52:31.479327 ntpd[1940]: Listen normally on 9 cali7ebfe6f3ffe [fe80::ecee:eeff:feee:eeee%5]:123 Nov 12 20:52:31.479368 ntpd[1940]: Listen normally on 10 vxlan.calico [fe80::64f2:1ff:fe7f:ab21%6]:123 Nov 12 20:52:31.479408 ntpd[1940]: Listen normally on 11 cali5183fea9279 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 20:52:31.479460 ntpd[1940]: Listen normally on 12 calid9649997b03 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 20:52:31.716633 systemd-networkd[1812]: calie9b614912b7: Link UP Nov 12 20:52:31.723491 systemd-networkd[1812]: calie9b614912b7: Gained carrier Nov 12 20:52:31.727391 (udev-worker)[6434]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.527 [INFO][6413] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0 calico-apiserver-7466d5c5cc- calico-apiserver 45411fe7-dc80-47ef-ab22-476dbc16d243 1156 0 2024-11-12 20:51:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7466d5c5cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-222 calico-apiserver-7466d5c5cc-sz8kp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie9b614912b7 [] []}} ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-sz8kp" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.527 [INFO][6413] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-sz8kp" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.574 [INFO][6424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" HandleID="k8s-pod-network.b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.603 [INFO][6424] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" 
HandleID="k8s-pod-network.b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002909b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-222", "pod":"calico-apiserver-7466d5c5cc-sz8kp", "timestamp":"2024-11-12 20:52:31.57401202 +0000 UTC"}, Hostname:"ip-172-31-18-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.604 [INFO][6424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.604 [INFO][6424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.604 [INFO][6424] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-222' Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.607 [INFO][6424] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" host="ip-172-31-18-222" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.617 [INFO][6424] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-222" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.629 [INFO][6424] ipam/ipam.go 489: Trying affinity for 192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.634 [INFO][6424] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.641 [INFO][6424] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 
host="ip-172-31-18-222" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.641 [INFO][6424] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" host="ip-172-31-18-222" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.647 [INFO][6424] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3 Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.660 [INFO][6424] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" host="ip-172-31-18-222" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.692 [INFO][6424] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.69/26] block=192.168.88.64/26 handle="k8s-pod-network.b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" host="ip-172-31-18-222" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.692 [INFO][6424] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.69/26] handle="k8s-pod-network.b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" host="ip-172-31-18-222" Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.692 [INFO][6424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:52:31.828023 containerd[1962]: 2024-11-12 20:52:31.692 [INFO][6424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.69/26] IPv6=[] ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" HandleID="k8s-pod-network.b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.837342 containerd[1962]: 2024-11-12 20:52:31.705 [INFO][6413] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-sz8kp" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0", GenerateName:"calico-apiserver-7466d5c5cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"45411fe7-dc80-47ef-ab22-476dbc16d243", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466d5c5cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"", Pod:"calico-apiserver-7466d5c5cc-sz8kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9b614912b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:31.837342 containerd[1962]: 2024-11-12 20:52:31.705 [INFO][6413] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.69/32] ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-sz8kp" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.837342 containerd[1962]: 2024-11-12 20:52:31.705 [INFO][6413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9b614912b7 ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-sz8kp" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.837342 containerd[1962]: 2024-11-12 20:52:31.722 [INFO][6413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-sz8kp" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.837342 containerd[1962]: 2024-11-12 20:52:31.726 [INFO][6413] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-sz8kp" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0", GenerateName:"calico-apiserver-7466d5c5cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"45411fe7-dc80-47ef-ab22-476dbc16d243", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466d5c5cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3", Pod:"calico-apiserver-7466d5c5cc-sz8kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9b614912b7", MAC:"3a:1a:2e:d5:5c:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:31.837342 containerd[1962]: 2024-11-12 20:52:31.801 [INFO][6413] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3" Namespace="calico-apiserver" Pod="calico-apiserver-7466d5c5cc-sz8kp" WorkloadEndpoint="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:31.989930 containerd[1962]: time="2024-11-12T20:52:31.989369135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:52:31.989930 containerd[1962]: time="2024-11-12T20:52:31.989521059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:52:31.989930 containerd[1962]: time="2024-11-12T20:52:31.989546970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:31.989930 containerd[1962]: time="2024-11-12T20:52:31.989672110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:32.124750 systemd[1]: Started cri-containerd-b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3.scope - libcontainer container b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3. Nov 12 20:52:32.309691 containerd[1962]: time="2024-11-12T20:52:32.309398103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7466d5c5cc-sz8kp,Uid:45411fe7-dc80-47ef-ab22-476dbc16d243,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3\"" Nov 12 20:52:32.323639 containerd[1962]: time="2024-11-12T20:52:32.323479906Z" level=info msg="CreateContainer within sandbox \"b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:52:32.492720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1671962769.mount: Deactivated successfully. 
Nov 12 20:52:32.540337 containerd[1962]: time="2024-11-12T20:52:32.535932441Z" level=info msg="CreateContainer within sandbox \"b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"316cefcf25acde12ed10447d479a3d4331c8d78da2ca04fe2fb23380d7c206ce\"" Nov 12 20:52:32.543851 containerd[1962]: time="2024-11-12T20:52:32.543723433Z" level=info msg="StartContainer for \"316cefcf25acde12ed10447d479a3d4331c8d78da2ca04fe2fb23380d7c206ce\"" Nov 12 20:52:32.655381 systemd[1]: Started cri-containerd-316cefcf25acde12ed10447d479a3d4331c8d78da2ca04fe2fb23380d7c206ce.scope - libcontainer container 316cefcf25acde12ed10447d479a3d4331c8d78da2ca04fe2fb23380d7c206ce. Nov 12 20:52:32.853225 systemd-networkd[1812]: calie9b614912b7: Gained IPv6LL Nov 12 20:52:33.212803 containerd[1962]: time="2024-11-12T20:52:33.212752702Z" level=info msg="StartContainer for \"316cefcf25acde12ed10447d479a3d4331c8d78da2ca04fe2fb23380d7c206ce\" returns successfully" Nov 12 20:52:33.270640 containerd[1962]: time="2024-11-12T20:52:33.270543508Z" level=info msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.463 [INFO][6539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.465 [INFO][6539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" iface="eth0" netns="/var/run/netns/cni-f48b3711-a7b8-d264-960d-2c796f49ce05" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.467 [INFO][6539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" iface="eth0" netns="/var/run/netns/cni-f48b3711-a7b8-d264-960d-2c796f49ce05" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.467 [INFO][6539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" iface="eth0" netns="/var/run/netns/cni-f48b3711-a7b8-d264-960d-2c796f49ce05" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.467 [INFO][6539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.467 [INFO][6539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.564 [INFO][6548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" HandleID="k8s-pod-network.1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.566 [INFO][6548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.566 [INFO][6548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.576 [WARNING][6548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" HandleID="k8s-pod-network.1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.576 [INFO][6548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" HandleID="k8s-pod-network.1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.579 [INFO][6548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:33.585550 containerd[1962]: 2024-11-12 20:52:33.582 [INFO][6539] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:33.587506 containerd[1962]: time="2024-11-12T20:52:33.586111573Z" level=info msg="TearDown network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" successfully" Nov 12 20:52:33.587506 containerd[1962]: time="2024-11-12T20:52:33.586147623Z" level=info msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" returns successfully" Nov 12 20:52:33.592689 containerd[1962]: time="2024-11-12T20:52:33.592644052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8kwmk,Uid:e5610ae2-edd8-4453-84a0-6212f12ff6f6,Namespace:kube-system,Attempt:1,}" Nov 12 20:52:33.596875 systemd[1]: run-netns-cni\x2df48b3711\x2da7b8\x2dd264\x2d960d\x2d2c796f49ce05.mount: Deactivated successfully. 
Nov 12 20:52:33.933772 systemd-networkd[1812]: cali4bc92446d9d: Link UP Nov 12 20:52:33.934025 systemd-networkd[1812]: cali4bc92446d9d: Gained carrier Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.722 [INFO][6555] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0 coredns-7db6d8ff4d- kube-system e5610ae2-edd8-4453-84a0-6212f12ff6f6 1175 0 2024-11-12 20:51:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-222 coredns-7db6d8ff4d-8kwmk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4bc92446d9d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8kwmk" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.723 [INFO][6555] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8kwmk" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.829 [INFO][6565] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" HandleID="k8s-pod-network.ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.846 [INFO][6565] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" HandleID="k8s-pod-network.ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec610), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-222", "pod":"coredns-7db6d8ff4d-8kwmk", "timestamp":"2024-11-12 20:52:33.829967124 +0000 UTC"}, Hostname:"ip-172-31-18-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.846 [INFO][6565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.846 [INFO][6565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.847 [INFO][6565] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-222' Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.850 [INFO][6565] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" host="ip-172-31-18-222" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.864 [INFO][6565] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-222" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.874 [INFO][6565] ipam/ipam.go 489: Trying affinity for 192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.878 [INFO][6565] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.889 [INFO][6565] ipam/ipam.go 232: Affinity is confirmed 
and block has been loaded cidr=192.168.88.64/26 host="ip-172-31-18-222" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.889 [INFO][6565] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" host="ip-172-31-18-222" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.893 [INFO][6565] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.900 [INFO][6565] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" host="ip-172-31-18-222" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.918 [INFO][6565] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.70/26] block=192.168.88.64/26 handle="k8s-pod-network.ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" host="ip-172-31-18-222" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.918 [INFO][6565] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.70/26] handle="k8s-pod-network.ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" host="ip-172-31-18-222" Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.918 [INFO][6565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:52:33.966594 containerd[1962]: 2024-11-12 20:52:33.919 [INFO][6565] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.70/26] IPv6=[] ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" HandleID="k8s-pod-network.ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:33.969494 containerd[1962]: 2024-11-12 20:52:33.924 [INFO][6555] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8kwmk" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5610ae2-edd8-4453-84a0-6212f12ff6f6", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"", Pod:"coredns-7db6d8ff4d-8kwmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4bc92446d9d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:33.969494 containerd[1962]: 2024-11-12 20:52:33.924 [INFO][6555] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.70/32] ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8kwmk" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:33.969494 containerd[1962]: 2024-11-12 20:52:33.925 [INFO][6555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4bc92446d9d ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8kwmk" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:33.969494 containerd[1962]: 2024-11-12 20:52:33.930 [INFO][6555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8kwmk" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:33.969494 containerd[1962]: 2024-11-12 20:52:33.931 [INFO][6555] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8kwmk" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5610ae2-edd8-4453-84a0-6212f12ff6f6", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c", Pod:"coredns-7db6d8ff4d-8kwmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4bc92446d9d", MAC:"4e:af:d8:a4:3a:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:33.969494 containerd[1962]: 2024-11-12 20:52:33.960 [INFO][6555] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8kwmk" WorkloadEndpoint="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:34.139248 containerd[1962]: time="2024-11-12T20:52:34.137520699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:52:34.139248 containerd[1962]: time="2024-11-12T20:52:34.137617367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:52:34.139248 containerd[1962]: time="2024-11-12T20:52:34.137640680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:34.139248 containerd[1962]: time="2024-11-12T20:52:34.137759001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:52:34.233537 systemd[1]: Started cri-containerd-ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c.scope - libcontainer container ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c. 
Nov 12 20:52:34.239643 kubelet[3519]: I1112 20:52:34.238884 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7466d5c5cc-sz8kp" podStartSLOduration=81.23885707 podStartE2EDuration="1m21.23885707s" podCreationTimestamp="2024-11-12 20:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:52:34.232146233 +0000 UTC m=+105.251999769" watchObservedRunningTime="2024-11-12 20:52:34.23885707 +0000 UTC m=+105.258710603" Nov 12 20:52:34.411567 containerd[1962]: time="2024-11-12T20:52:34.411523251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8kwmk,Uid:e5610ae2-edd8-4453-84a0-6212f12ff6f6,Namespace:kube-system,Attempt:1,} returns sandbox id \"ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c\"" Nov 12 20:52:34.418528 containerd[1962]: time="2024-11-12T20:52:34.418317221Z" level=info msg="CreateContainer within sandbox \"ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:52:34.449023 containerd[1962]: time="2024-11-12T20:52:34.447445166Z" level=info msg="CreateContainer within sandbox \"ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cf610aabf37e22b4a144ef3828d3308da63ac291f2c185cf6f66fbe634365bc\"" Nov 12 20:52:34.449023 containerd[1962]: time="2024-11-12T20:52:34.448958827Z" level=info msg="StartContainer for \"7cf610aabf37e22b4a144ef3828d3308da63ac291f2c185cf6f66fbe634365bc\"" Nov 12 20:52:34.522182 containerd[1962]: time="2024-11-12T20:52:34.520907962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:34.526198 containerd[1962]: time="2024-11-12T20:52:34.526115986Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:52:34.529724 containerd[1962]: time="2024-11-12T20:52:34.528754423Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:34.542636 containerd[1962]: time="2024-11-12T20:52:34.541822617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:34.545265 systemd[1]: Started cri-containerd-7cf610aabf37e22b4a144ef3828d3308da63ac291f2c185cf6f66fbe634365bc.scope - libcontainer container 7cf610aabf37e22b4a144ef3828d3308da63ac291f2c185cf6f66fbe634365bc. Nov 12 20:52:34.552093 containerd[1962]: time="2024-11-12T20:52:34.550996264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 5.939211117s" Nov 12 20:52:34.552093 containerd[1962]: time="2024-11-12T20:52:34.551149548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:52:34.599115 containerd[1962]: time="2024-11-12T20:52:34.597349687Z" level=info msg="CreateContainer within sandbox \"89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:52:34.638627 containerd[1962]: time="2024-11-12T20:52:34.638584738Z" level=info msg="StartContainer for 
\"7cf610aabf37e22b4a144ef3828d3308da63ac291f2c185cf6f66fbe634365bc\" returns successfully" Nov 12 20:52:34.647705 containerd[1962]: time="2024-11-12T20:52:34.647656932Z" level=info msg="CreateContainer within sandbox \"89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7bcc673cdc9070c10d2f2e4e6884ad29a11924f924e5fffb4ae43756c45d20fd\"" Nov 12 20:52:34.649088 containerd[1962]: time="2024-11-12T20:52:34.648724947Z" level=info msg="StartContainer for \"7bcc673cdc9070c10d2f2e4e6884ad29a11924f924e5fffb4ae43756c45d20fd\"" Nov 12 20:52:34.717563 systemd[1]: Started cri-containerd-7bcc673cdc9070c10d2f2e4e6884ad29a11924f924e5fffb4ae43756c45d20fd.scope - libcontainer container 7bcc673cdc9070c10d2f2e4e6884ad29a11924f924e5fffb4ae43756c45d20fd. Nov 12 20:52:34.810776 containerd[1962]: time="2024-11-12T20:52:34.810668821Z" level=info msg="StartContainer for \"7bcc673cdc9070c10d2f2e4e6884ad29a11924f924e5fffb4ae43756c45d20fd\" returns successfully" Nov 12 20:52:35.242770 kubelet[3519]: I1112 20:52:35.242702 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-576c7b7594-qkhxb" podStartSLOduration=73.327732482 podStartE2EDuration="1m21.242677017s" podCreationTimestamp="2024-11-12 20:51:14 +0000 UTC" firstStartedPulling="2024-11-12 20:52:26.637413283 +0000 UTC m=+97.657266794" lastFinishedPulling="2024-11-12 20:52:34.552357816 +0000 UTC m=+105.572211329" observedRunningTime="2024-11-12 20:52:35.241596793 +0000 UTC m=+106.261450317" watchObservedRunningTime="2024-11-12 20:52:35.242677017 +0000 UTC m=+106.262530549" Nov 12 20:52:35.275891 kubelet[3519]: I1112 20:52:35.275824 3519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8kwmk" podStartSLOduration=93.275707473 podStartE2EDuration="1m33.275707473s" podCreationTimestamp="2024-11-12 20:51:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:52:35.271035749 +0000 UTC m=+106.290889281" watchObservedRunningTime="2024-11-12 20:52:35.275707473 +0000 UTC m=+106.295561007" Nov 12 20:52:35.668420 systemd-networkd[1812]: cali4bc92446d9d: Gained IPv6LL Nov 12 20:52:35.991702 systemd[1]: Started sshd@21-172.31.18.222:22-139.178.89.65:34730.service - OpenSSH per-connection server daemon (139.178.89.65:34730). Nov 12 20:52:36.199165 sshd[6742]: Accepted publickey for core from 139.178.89.65 port 34730 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:36.202351 sshd[6742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:36.209895 systemd-logind[1947]: New session 22 of user core. Nov 12 20:52:36.218324 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:52:37.056037 sshd[6742]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:37.067632 systemd[1]: sshd@21-172.31.18.222:22-139.178.89.65:34730.service: Deactivated successfully. Nov 12 20:52:37.074732 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 20:52:37.076207 systemd-logind[1947]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:52:37.112790 systemd[1]: Started sshd@22-172.31.18.222:22-139.178.89.65:60286.service - OpenSSH per-connection server daemon (139.178.89.65:60286). Nov 12 20:52:37.114954 systemd-logind[1947]: Removed session 22. Nov 12 20:52:37.334326 sshd[6758]: Accepted publickey for core from 139.178.89.65 port 60286 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:37.335272 sshd[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:37.344410 systemd-logind[1947]: New session 23 of user core. Nov 12 20:52:37.354505 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 12 20:52:38.148969 sshd[6758]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:38.156823 systemd[1]: sshd@22-172.31.18.222:22-139.178.89.65:60286.service: Deactivated successfully. Nov 12 20:52:38.160239 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 20:52:38.162385 systemd-logind[1947]: Session 23 logged out. Waiting for processes to exit. Nov 12 20:52:38.163657 systemd-logind[1947]: Removed session 23. Nov 12 20:52:38.182326 systemd[1]: Started sshd@23-172.31.18.222:22-139.178.89.65:60292.service - OpenSSH per-connection server daemon (139.178.89.65:60292). Nov 12 20:52:38.428393 sshd[6769]: Accepted publickey for core from 139.178.89.65 port 60292 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:38.432280 sshd[6769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:38.442356 systemd-logind[1947]: New session 24 of user core. Nov 12 20:52:38.448426 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 20:52:38.478876 ntpd[1940]: Listen normally on 13 calie9b614912b7 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 20:52:38.479048 ntpd[1940]: Listen normally on 14 cali4bc92446d9d [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 20:52:41.660116 sshd[6769]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:41.667975 systemd[1]: sshd@23-172.31.18.222:22-139.178.89.65:60292.service: Deactivated successfully. Nov 12 20:52:41.672390 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 20:52:41.675018 systemd-logind[1947]: Session 24 logged out. Waiting for processes to exit. Nov 12 20:52:41.676984 systemd-logind[1947]: Removed session 24. 
Nov 12 20:52:41.695749 systemd[1]: Started sshd@24-172.31.18.222:22-139.178.89.65:60300.service - OpenSSH per-connection server daemon (139.178.89.65:60300). Nov 12 20:52:41.930144 sshd[6791]: Accepted publickey for core from 139.178.89.65 port 60300 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:41.932224 sshd[6791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:41.941263 systemd-logind[1947]: New session 25 of user core. Nov 12 20:52:41.947279 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 20:52:43.325576 sshd[6791]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:43.336362 systemd[1]: sshd@24-172.31.18.222:22-139.178.89.65:60300.service: Deactivated successfully. Nov 12 20:52:43.343689 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 20:52:43.360707 systemd-logind[1947]: Session 25 logged out. Waiting for processes to exit. Nov 12 20:52:43.367718 systemd[1]: Started sshd@25-172.31.18.222:22-139.178.89.65:60306.service - OpenSSH per-connection server daemon (139.178.89.65:60306). Nov 12 20:52:43.373518 systemd-logind[1947]: Removed session 25. Nov 12 20:52:43.595767 sshd[6802]: Accepted publickey for core from 139.178.89.65 port 60306 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:43.597188 sshd[6802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:43.603871 systemd-logind[1947]: New session 26 of user core. Nov 12 20:52:43.609260 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 20:52:43.866960 sshd[6802]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:43.873574 systemd[1]: sshd@25-172.31.18.222:22-139.178.89.65:60306.service: Deactivated successfully. Nov 12 20:52:43.878044 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 20:52:43.879945 systemd-logind[1947]: Session 26 logged out. Waiting for processes to exit. 
Nov 12 20:52:43.881659 systemd-logind[1947]: Removed session 26. Nov 12 20:52:45.401598 systemd[1]: run-containerd-runc-k8s.io-c7cd2bc25f9cb0076f9a240d6607427013abe3b0750ac0824cda98240253ea6a-runc.2Wxyo2.mount: Deactivated successfully. Nov 12 20:52:48.934196 systemd[1]: Started sshd@26-172.31.18.222:22-139.178.89.65:42506.service - OpenSSH per-connection server daemon (139.178.89.65:42506). Nov 12 20:52:49.255221 containerd[1962]: time="2024-11-12T20:52:49.255024989Z" level=info msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" Nov 12 20:52:49.311462 sshd[6844]: Accepted publickey for core from 139.178.89.65 port 42506 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg Nov 12 20:52:49.333949 sshd[6844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:49.385054 systemd-logind[1947]: New session 27 of user core. Nov 12 20:52:49.387296 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:49.988 [WARNING][6868] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0", GenerateName:"calico-kube-controllers-576c7b7594-", Namespace:"calico-system", SelfLink:"", UID:"e2e7be2e-a36d-4ab1-8ede-5e860bf07447", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576c7b7594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66", Pod:"calico-kube-controllers-576c7b7594-qkhxb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5183fea9279", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:49.990 [INFO][6868] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:49.990 [INFO][6868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" iface="eth0" netns="" Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:49.990 [INFO][6868] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:49.990 [INFO][6868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:50.392 [INFO][6879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" HandleID="k8s-pod-network.e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:50.407 [INFO][6879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:50.409 [INFO][6879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:50.473 [WARNING][6879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" HandleID="k8s-pod-network.e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:50.473 [INFO][6879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" HandleID="k8s-pod-network.e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:50.485 [INFO][6879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:50.510996 containerd[1962]: 2024-11-12 20:52:50.497 [INFO][6868] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:50.549473 containerd[1962]: time="2024-11-12T20:52:50.547830641Z" level=info msg="TearDown network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" successfully" Nov 12 20:52:50.549473 containerd[1962]: time="2024-11-12T20:52:50.547886017Z" level=info msg="StopPodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" returns successfully" Nov 12 20:52:50.550019 sshd[6844]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:50.564308 systemd[1]: sshd@26-172.31.18.222:22-139.178.89.65:42506.service: Deactivated successfully. Nov 12 20:52:50.570082 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 20:52:50.579327 systemd-logind[1947]: Session 27 logged out. Waiting for processes to exit. Nov 12 20:52:50.581677 systemd-logind[1947]: Removed session 27. 
Nov 12 20:52:51.215298 containerd[1962]: time="2024-11-12T20:52:51.215142221Z" level=info msg="RemovePodSandbox for \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" Nov 12 20:52:51.225478 containerd[1962]: time="2024-11-12T20:52:51.225159701Z" level=info msg="Forcibly stopping sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\"" Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.380 [WARNING][6903] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0", GenerateName:"calico-kube-controllers-576c7b7594-", Namespace:"calico-system", SelfLink:"", UID:"e2e7be2e-a36d-4ab1-8ede-5e860bf07447", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576c7b7594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"89f55a05cb5ad8f49c0ad542e6642a1c5481d198d062d7d4a4caa05614c17e66", Pod:"calico-kube-controllers-576c7b7594-qkhxb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5183fea9279", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.382 [INFO][6903] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.382 [INFO][6903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" iface="eth0" netns="" Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.382 [INFO][6903] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.382 [INFO][6903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.464 [INFO][6910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" HandleID="k8s-pod-network.e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.465 [INFO][6910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.465 [INFO][6910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.474 [WARNING][6910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" HandleID="k8s-pod-network.e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.474 [INFO][6910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" HandleID="k8s-pod-network.e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Workload="ip--172--31--18--222-k8s-calico--kube--controllers--576c7b7594--qkhxb-eth0" Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.476 [INFO][6910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:51.489535 containerd[1962]: 2024-11-12 20:52:51.483 [INFO][6903] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e" Nov 12 20:52:51.494506 containerd[1962]: time="2024-11-12T20:52:51.490261337Z" level=info msg="TearDown network for sandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" successfully" Nov 12 20:52:51.597170 containerd[1962]: time="2024-11-12T20:52:51.597115967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:52:51.618749 containerd[1962]: time="2024-11-12T20:52:51.616309962Z" level=info msg="RemovePodSandbox \"e75062909ddf7c1db36219708aba2e106ebcad64f68ce551291f8c1bfec8eb7e\" returns successfully" Nov 12 20:52:51.650828 containerd[1962]: time="2024-11-12T20:52:51.650783102Z" level=info msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\"" Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.706 [WARNING][6928] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b9630a59-9ff8-489c-b4ee-f423326fdc24", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928", Pod:"coredns-7db6d8ff4d-zhn5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9649997b03", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.707 [INFO][6928] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.707 [INFO][6928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" iface="eth0" netns="" Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.707 [INFO][6928] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.707 [INFO][6928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.780 [INFO][6934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" HandleID="k8s-pod-network.81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.782 [INFO][6934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.782 [INFO][6934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.800 [WARNING][6934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" HandleID="k8s-pod-network.81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.801 [INFO][6934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" HandleID="k8s-pod-network.81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.803 [INFO][6934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:51.810415 containerd[1962]: 2024-11-12 20:52:51.807 [INFO][6928] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:51.810415 containerd[1962]: time="2024-11-12T20:52:51.809601490Z" level=info msg="TearDown network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" successfully" Nov 12 20:52:51.815247 containerd[1962]: time="2024-11-12T20:52:51.810882730Z" level=info msg="StopPodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" returns successfully" Nov 12 20:52:51.815247 containerd[1962]: time="2024-11-12T20:52:51.812322322Z" level=info msg="RemovePodSandbox for \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\"" Nov 12 20:52:51.815247 containerd[1962]: time="2024-11-12T20:52:51.812356678Z" level=info msg="Forcibly stopping sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\"" Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.900 [WARNING][6952] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b9630a59-9ff8-489c-b4ee-f423326fdc24", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"9e0efe5ac8b7e690402eef57bf88e47f0bad0d6f5f4e88b3e32868f735a34928", Pod:"coredns-7db6d8ff4d-zhn5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9649997b03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.901 [INFO][6952] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.901 [INFO][6952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" iface="eth0" netns="" Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.901 [INFO][6952] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.901 [INFO][6952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.949 [INFO][6959] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" HandleID="k8s-pod-network.81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.949 [INFO][6959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.949 [INFO][6959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.958 [WARNING][6959] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" HandleID="k8s-pod-network.81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.958 [INFO][6959] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" HandleID="k8s-pod-network.81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--zhn5h-eth0" Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.960 [INFO][6959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:51.964977 containerd[1962]: 2024-11-12 20:52:51.963 [INFO][6952] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10" Nov 12 20:52:51.966765 containerd[1962]: time="2024-11-12T20:52:51.965079660Z" level=info msg="TearDown network for sandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" successfully" Nov 12 20:52:52.053747 containerd[1962]: time="2024-11-12T20:52:52.051602532Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:52:52.053747 containerd[1962]: time="2024-11-12T20:52:52.051796893Z" level=info msg="RemovePodSandbox \"81d3511c937b3be6a07ffabca45dd81db52f159b3a2cfff38b1b905d9a113c10\" returns successfully" Nov 12 20:52:52.101151 containerd[1962]: time="2024-11-12T20:52:52.100302415Z" level=info msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\"" Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.238 [WARNING][6977] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0", GenerateName:"calico-apiserver-7466d5c5cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466d5c5cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d", Pod:"calico-apiserver-7466d5c5cc-hqszx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ebfe6f3ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.239 [INFO][6977] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.239 [INFO][6977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" iface="eth0" netns="" Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.240 [INFO][6977] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.240 [INFO][6977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.358 [INFO][6983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" HandleID="k8s-pod-network.1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.360 [INFO][6983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.360 [INFO][6983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.393 [WARNING][6983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" HandleID="k8s-pod-network.1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.394 [INFO][6983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" HandleID="k8s-pod-network.1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.399 [INFO][6983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:52.410901 containerd[1962]: 2024-11-12 20:52:52.406 [INFO][6977] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:52.414106 containerd[1962]: time="2024-11-12T20:52:52.410906128Z" level=info msg="TearDown network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" successfully" Nov 12 20:52:52.414106 containerd[1962]: time="2024-11-12T20:52:52.410962483Z" level=info msg="StopPodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" returns successfully" Nov 12 20:52:52.416114 containerd[1962]: time="2024-11-12T20:52:52.415797028Z" level=info msg="RemovePodSandbox for \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\"" Nov 12 20:52:52.416114 containerd[1962]: time="2024-11-12T20:52:52.415874267Z" level=info msg="Forcibly stopping sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\"" Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.560 [WARNING][7002] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0", GenerateName:"calico-apiserver-7466d5c5cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bebdd9c-33eb-4dd5-9f95-d6fe88644bc6", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466d5c5cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"f9736e80609fd8e5deed9c5b9891240bbedd356bfb18aa9d99bff571e814932d", Pod:"calico-apiserver-7466d5c5cc-hqszx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ebfe6f3ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.561 [INFO][7002] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.561 [INFO][7002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" iface="eth0" netns="" Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.561 [INFO][7002] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.561 [INFO][7002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.595 [INFO][7009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" HandleID="k8s-pod-network.1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.595 [INFO][7009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.596 [INFO][7009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.602 [WARNING][7009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" HandleID="k8s-pod-network.1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.602 [INFO][7009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" HandleID="k8s-pod-network.1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--hqszx-eth0" Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.605 [INFO][7009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:52.609223 containerd[1962]: 2024-11-12 20:52:52.607 [INFO][7002] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f" Nov 12 20:52:52.610393 containerd[1962]: time="2024-11-12T20:52:52.609371333Z" level=info msg="TearDown network for sandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" successfully" Nov 12 20:52:52.615402 containerd[1962]: time="2024-11-12T20:52:52.615362663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:52:52.615550 containerd[1962]: time="2024-11-12T20:52:52.615440005Z" level=info msg="RemovePodSandbox \"1a49fa78ea45b5e11ec8762c2846fa657aabaac76eedfad9b296d0ae5535263f\" returns successfully" Nov 12 20:52:52.616113 containerd[1962]: time="2024-11-12T20:52:52.616083261Z" level=info msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.679 [WARNING][7027] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5610ae2-edd8-4453-84a0-6212f12ff6f6", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c", Pod:"coredns-7db6d8ff4d-8kwmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4bc92446d9d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.679 [INFO][7027] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.679 [INFO][7027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" iface="eth0" netns="" Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.679 [INFO][7027] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.679 [INFO][7027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.709 [INFO][7033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" HandleID="k8s-pod-network.1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.709 [INFO][7033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.709 [INFO][7033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.719 [WARNING][7033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" HandleID="k8s-pod-network.1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.719 [INFO][7033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" HandleID="k8s-pod-network.1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.722 [INFO][7033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:52.730136 containerd[1962]: 2024-11-12 20:52:52.726 [INFO][7027] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:52.730136 containerd[1962]: time="2024-11-12T20:52:52.729435390Z" level=info msg="TearDown network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" successfully" Nov 12 20:52:52.730136 containerd[1962]: time="2024-11-12T20:52:52.729484769Z" level=info msg="StopPodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" returns successfully" Nov 12 20:52:52.735149 containerd[1962]: time="2024-11-12T20:52:52.730161333Z" level=info msg="RemovePodSandbox for \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" Nov 12 20:52:52.735149 containerd[1962]: time="2024-11-12T20:52:52.730193591Z" level=info msg="Forcibly stopping sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\"" Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.811 [WARNING][7051] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5610ae2-edd8-4453-84a0-6212f12ff6f6", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"ea525f296909c335e8673629874ed5a98edb05e18fac6e0c13cedb209c11b31c", Pod:"coredns-7db6d8ff4d-8kwmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4bc92446d9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.813 [INFO][7051] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.813 [INFO][7051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" iface="eth0" netns="" Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.813 [INFO][7051] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.813 [INFO][7051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.871 [INFO][7057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" HandleID="k8s-pod-network.1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.871 [INFO][7057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.872 [INFO][7057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.883 [WARNING][7057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" HandleID="k8s-pod-network.1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.883 [INFO][7057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" HandleID="k8s-pod-network.1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Workload="ip--172--31--18--222-k8s-coredns--7db6d8ff4d--8kwmk-eth0" Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.886 [INFO][7057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:52.892500 containerd[1962]: 2024-11-12 20:52:52.889 [INFO][7051] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8" Nov 12 20:52:52.894730 containerd[1962]: time="2024-11-12T20:52:52.893192494Z" level=info msg="TearDown network for sandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" successfully" Nov 12 20:52:52.899425 containerd[1962]: time="2024-11-12T20:52:52.899160789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:52:52.899425 containerd[1962]: time="2024-11-12T20:52:52.899305150Z" level=info msg="RemovePodSandbox \"1d2e96ce085676a224f288aaea5e047769eba01d2451361ad9414d07cc62fda8\" returns successfully" Nov 12 20:52:52.901853 containerd[1962]: time="2024-11-12T20:52:52.901484063Z" level=info msg="StopPodSandbox for \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\"" Nov 12 20:52:52.901853 containerd[1962]: time="2024-11-12T20:52:52.901599152Z" level=info msg="TearDown network for sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" successfully" Nov 12 20:52:52.901853 containerd[1962]: time="2024-11-12T20:52:52.901616389Z" level=info msg="StopPodSandbox for \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" returns successfully" Nov 12 20:52:52.903394 containerd[1962]: time="2024-11-12T20:52:52.903115370Z" level=info msg="RemovePodSandbox for \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\"" Nov 12 20:52:52.903394 containerd[1962]: time="2024-11-12T20:52:52.903143439Z" level=info msg="Forcibly stopping sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\"" Nov 12 20:52:52.904636 containerd[1962]: time="2024-11-12T20:52:52.903964879Z" level=info msg="TearDown network for sandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" successfully" Nov 12 20:52:52.911127 containerd[1962]: time="2024-11-12T20:52:52.911053074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:52:52.911350 containerd[1962]: time="2024-11-12T20:52:52.911166931Z" level=info msg="RemovePodSandbox \"65a39acd773efd1f1ce4d5013319378dc7e379ac8f58771814a57b7bf5eb2169\" returns successfully" Nov 12 20:52:52.913432 containerd[1962]: time="2024-11-12T20:52:52.911926206Z" level=info msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:52.968 [WARNING][7076] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0", GenerateName:"calico-apiserver-7466d5c5cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"45411fe7-dc80-47ef-ab22-476dbc16d243", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466d5c5cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3", Pod:"calico-apiserver-7466d5c5cc-sz8kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9b614912b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:52.969 [INFO][7076] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:52.969 [INFO][7076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" iface="eth0" netns="" Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:52.969 [INFO][7076] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:52.969 [INFO][7076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:52.998 [INFO][7082] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" HandleID="k8s-pod-network.be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:52.998 [INFO][7082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:52.998 [INFO][7082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:53.005 [WARNING][7082] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" HandleID="k8s-pod-network.be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:53.005 [INFO][7082] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" HandleID="k8s-pod-network.be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:53.007 [INFO][7082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:53.012328 containerd[1962]: 2024-11-12 20:52:53.008 [INFO][7076] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:53.012328 containerd[1962]: time="2024-11-12T20:52:53.011013549Z" level=info msg="TearDown network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" successfully" Nov 12 20:52:53.012328 containerd[1962]: time="2024-11-12T20:52:53.011044358Z" level=info msg="StopPodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" returns successfully" Nov 12 20:52:53.012328 containerd[1962]: time="2024-11-12T20:52:53.011561866Z" level=info msg="RemovePodSandbox for \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" Nov 12 20:52:53.012328 containerd[1962]: time="2024-11-12T20:52:53.011594833Z" level=info msg="Forcibly stopping sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\"" Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.063 [WARNING][7100] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0", GenerateName:"calico-apiserver-7466d5c5cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"45411fe7-dc80-47ef-ab22-476dbc16d243", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7466d5c5cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"b294bf932d90a4b7c3e91069e9a48e3972c33b8e8d1cfc13540c6c19b009bad3", Pod:"calico-apiserver-7466d5c5cc-sz8kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9b614912b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.063 [INFO][7100] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.063 [INFO][7100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" iface="eth0" netns="" Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.063 [INFO][7100] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.063 [INFO][7100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.100 [INFO][7106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" HandleID="k8s-pod-network.be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.100 [INFO][7106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.101 [INFO][7106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.108 [WARNING][7106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" HandleID="k8s-pod-network.be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.109 [INFO][7106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" HandleID="k8s-pod-network.be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Workload="ip--172--31--18--222-k8s-calico--apiserver--7466d5c5cc--sz8kp-eth0" Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.110 [INFO][7106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:52:53.114141 containerd[1962]: 2024-11-12 20:52:53.112 [INFO][7100] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898" Nov 12 20:52:53.114938 containerd[1962]: time="2024-11-12T20:52:53.114876917Z" level=info msg="TearDown network for sandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" successfully" Nov 12 20:52:53.120989 containerd[1962]: time="2024-11-12T20:52:53.120926961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:52:53.121235 containerd[1962]: time="2024-11-12T20:52:53.121009901Z" level=info msg="RemovePodSandbox \"be18c4fcf41e270f9c2f64d59fcc9c773d10a7b60adb2517026ce15495606898\" returns successfully"
Nov 12 20:52:53.122126 containerd[1962]: time="2024-11-12T20:52:53.121790500Z" level=info msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\""
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.183 [WARNING][7124] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"579a082e-68d8-4be8-b0d9-8983607906fe", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88", Pod:"csi-node-driver-xw4fv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fdb21aa7bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.183 [INFO][7124] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.183 [INFO][7124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" iface="eth0" netns=""
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.184 [INFO][7124] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.184 [INFO][7124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.224 [INFO][7130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" HandleID="k8s-pod-network.e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0"
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.224 [INFO][7130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.224 [INFO][7130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.233 [WARNING][7130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" HandleID="k8s-pod-network.e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0"
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.233 [INFO][7130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" HandleID="k8s-pod-network.e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0"
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.235 [INFO][7130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:52:53.239928 containerd[1962]: 2024-11-12 20:52:53.237 [INFO][7124] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"
Nov 12 20:52:53.242289 containerd[1962]: time="2024-11-12T20:52:53.239977625Z" level=info msg="TearDown network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" successfully"
Nov 12 20:52:53.242289 containerd[1962]: time="2024-11-12T20:52:53.240007485Z" level=info msg="StopPodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" returns successfully"
Nov 12 20:52:53.242289 containerd[1962]: time="2024-11-12T20:52:53.240947620Z" level=info msg="RemovePodSandbox for \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\""
Nov 12 20:52:53.242289 containerd[1962]: time="2024-11-12T20:52:53.240986957Z" level=info msg="Forcibly stopping sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\""
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.325 [WARNING][7148] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"579a082e-68d8-4be8-b0d9-8983607906fe", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 51, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-222", ContainerID:"a859dd1fa178d6d817c48af8c700626317b3e673e838b54a27072005771aef88", Pod:"csi-node-driver-xw4fv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fdb21aa7bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.325 [INFO][7148] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.325 [INFO][7148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" iface="eth0" netns=""
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.325 [INFO][7148] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.325 [INFO][7148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.355 [INFO][7155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" HandleID="k8s-pod-network.e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0"
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.356 [INFO][7155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.356 [INFO][7155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.364 [WARNING][7155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" HandleID="k8s-pod-network.e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0"
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.364 [INFO][7155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" HandleID="k8s-pod-network.e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65" Workload="ip--172--31--18--222-k8s-csi--node--driver--xw4fv-eth0"
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.368 [INFO][7155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:52:53.376448 containerd[1962]: 2024-11-12 20:52:53.372 [INFO][7148] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65"
Nov 12 20:52:53.376448 containerd[1962]: time="2024-11-12T20:52:53.375043946Z" level=info msg="TearDown network for sandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" successfully"
Nov 12 20:52:53.385757 containerd[1962]: time="2024-11-12T20:52:53.385710838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:52:53.385985 containerd[1962]: time="2024-11-12T20:52:53.385954922Z" level=info msg="RemovePodSandbox \"e1bb5e08b067be6e8f4063b7a0f7fe1887c3fba5b176045b0cd678342bedeb65\" returns successfully"
Nov 12 20:52:55.593947 systemd[1]: Started sshd@27-172.31.18.222:22-139.178.89.65:42512.service - OpenSSH per-connection server daemon (139.178.89.65:42512).
Nov 12 20:52:55.902399 sshd[7179]: Accepted publickey for core from 139.178.89.65 port 42512 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:52:55.905159 sshd[7179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:52:55.914893 systemd-logind[1947]: New session 28 of user core.
Nov 12 20:52:55.918307 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 12 20:52:56.499171 sshd[7179]: pam_unix(sshd:session): session closed for user core
Nov 12 20:52:56.502567 systemd[1]: sshd@27-172.31.18.222:22-139.178.89.65:42512.service: Deactivated successfully.
Nov 12 20:52:56.505332 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 20:52:56.508372 systemd-logind[1947]: Session 28 logged out. Waiting for processes to exit.
Nov 12 20:52:56.509719 systemd-logind[1947]: Removed session 28.
Nov 12 20:53:01.541324 systemd[1]: Started sshd@28-172.31.18.222:22-139.178.89.65:40170.service - OpenSSH per-connection server daemon (139.178.89.65:40170).
Nov 12 20:53:01.757290 sshd[7195]: Accepted publickey for core from 139.178.89.65 port 40170 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:53:01.758131 sshd[7195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:01.772249 systemd-logind[1947]: New session 29 of user core.
Nov 12 20:53:01.778688 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 12 20:53:02.876817 sshd[7195]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:02.915054 systemd[1]: sshd@28-172.31.18.222:22-139.178.89.65:40170.service: Deactivated successfully.
Nov 12 20:53:02.924598 systemd[1]: session-29.scope: Deactivated successfully.
Nov 12 20:53:02.937441 systemd-logind[1947]: Session 29 logged out. Waiting for processes to exit.
Nov 12 20:53:02.943167 systemd-logind[1947]: Removed session 29.
Nov 12 20:53:07.915388 systemd[1]: Started sshd@29-172.31.18.222:22-139.178.89.65:50172.service - OpenSSH per-connection server daemon (139.178.89.65:50172).
Nov 12 20:53:08.131751 sshd[7235]: Accepted publickey for core from 139.178.89.65 port 50172 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:53:08.133963 sshd[7235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:08.139932 systemd-logind[1947]: New session 30 of user core.
Nov 12 20:53:08.144269 systemd[1]: Started session-30.scope - Session 30 of User core.
Nov 12 20:53:08.467335 sshd[7235]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:08.471862 systemd[1]: sshd@29-172.31.18.222:22-139.178.89.65:50172.service: Deactivated successfully.
Nov 12 20:53:08.475117 systemd[1]: session-30.scope: Deactivated successfully.
Nov 12 20:53:08.476162 systemd-logind[1947]: Session 30 logged out. Waiting for processes to exit.
Nov 12 20:53:08.477406 systemd-logind[1947]: Removed session 30.
Nov 12 20:53:13.510500 systemd[1]: Started sshd@30-172.31.18.222:22-139.178.89.65:50178.service - OpenSSH per-connection server daemon (139.178.89.65:50178).
Nov 12 20:53:13.746106 sshd[7248]: Accepted publickey for core from 139.178.89.65 port 50178 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:53:13.746815 sshd[7248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:13.772407 systemd-logind[1947]: New session 31 of user core.
Nov 12 20:53:13.777325 systemd[1]: Started session-31.scope - Session 31 of User core.
Nov 12 20:53:13.992249 sshd[7248]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:13.998299 systemd[1]: sshd@30-172.31.18.222:22-139.178.89.65:50178.service: Deactivated successfully.
Nov 12 20:53:14.000818 systemd[1]: session-31.scope: Deactivated successfully.
Nov 12 20:53:14.001904 systemd-logind[1947]: Session 31 logged out. Waiting for processes to exit.
Nov 12 20:53:14.004475 systemd-logind[1947]: Removed session 31.
Nov 12 20:53:19.032538 systemd[1]: Started sshd@31-172.31.18.222:22-139.178.89.65:44564.service - OpenSSH per-connection server daemon (139.178.89.65:44564).
Nov 12 20:53:19.249138 sshd[7285]: Accepted publickey for core from 139.178.89.65 port 44564 ssh2: RSA SHA256:bYvsvjo5KZuZ/ba4s3N7Mtx2vQRiUN+Fm555+7wZnNg
Nov 12 20:53:19.251395 sshd[7285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:19.266178 systemd-logind[1947]: New session 32 of user core.
Nov 12 20:53:19.272372 systemd[1]: Started session-32.scope - Session 32 of User core.
Nov 12 20:53:20.172753 sshd[7285]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:20.186057 systemd[1]: sshd@31-172.31.18.222:22-139.178.89.65:44564.service: Deactivated successfully.
Nov 12 20:53:20.194673 systemd[1]: session-32.scope: Deactivated successfully.
Nov 12 20:53:20.196727 systemd-logind[1947]: Session 32 logged out. Waiting for processes to exit.
Nov 12 20:53:20.200775 systemd-logind[1947]: Removed session 32.
Nov 12 20:53:45.425012 systemd[1]: cri-containerd-1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb.scope: Deactivated successfully.
Nov 12 20:53:45.426281 systemd[1]: cri-containerd-1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb.scope: Consumed 4.278s CPU time.
Nov 12 20:53:45.731789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb-rootfs.mount: Deactivated successfully.
Nov 12 20:53:45.769457 containerd[1962]: time="2024-11-12T20:53:45.743202422Z" level=info msg="shim disconnected" id=1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb namespace=k8s.io
Nov 12 20:53:45.770118 containerd[1962]: time="2024-11-12T20:53:45.769457570Z" level=warning msg="cleaning up after shim disconnected" id=1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb namespace=k8s.io
Nov 12 20:53:45.770118 containerd[1962]: time="2024-11-12T20:53:45.769477748Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:53:46.509037 kubelet[3519]: I1112 20:53:46.508985 3519 scope.go:117] "RemoveContainer" containerID="1fc73b5e8130e33240acec2285f4c945a40da69d98d821902f22f7314e7575cb"
Nov 12 20:53:46.585197 containerd[1962]: time="2024-11-12T20:53:46.585139803Z" level=info msg="CreateContainer within sandbox \"5771091f10aaea0d5921ce80db1a9c9a60bea9bdecfc00bdc6a6bbf4ab5a0407\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 12 20:53:46.658595 systemd[1]: cri-containerd-d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29.scope: Deactivated successfully.
Nov 12 20:53:46.659663 systemd[1]: cri-containerd-d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29.scope: Consumed 3.647s CPU time, 22.7M memory peak, 0B memory swap peak.
Nov 12 20:53:46.755099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29-rootfs.mount: Deactivated successfully.
Nov 12 20:53:46.768844 containerd[1962]: time="2024-11-12T20:53:46.768781741Z" level=info msg="shim disconnected" id=d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29 namespace=k8s.io
Nov 12 20:53:46.769599 containerd[1962]: time="2024-11-12T20:53:46.769012744Z" level=warning msg="cleaning up after shim disconnected" id=d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29 namespace=k8s.io
Nov 12 20:53:46.769982 containerd[1962]: time="2024-11-12T20:53:46.769963135Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:53:46.860719 containerd[1962]: time="2024-11-12T20:53:46.860671518Z" level=info msg="CreateContainer within sandbox \"5771091f10aaea0d5921ce80db1a9c9a60bea9bdecfc00bdc6a6bbf4ab5a0407\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1422d6f2e297403c7e2fe166401caea3d70cf92033fdabc4c3e9615909f0b93a\""
Nov 12 20:53:46.861947 containerd[1962]: time="2024-11-12T20:53:46.861911992Z" level=info msg="StartContainer for \"1422d6f2e297403c7e2fe166401caea3d70cf92033fdabc4c3e9615909f0b93a\""
Nov 12 20:53:46.933408 systemd[1]: Started cri-containerd-1422d6f2e297403c7e2fe166401caea3d70cf92033fdabc4c3e9615909f0b93a.scope - libcontainer container 1422d6f2e297403c7e2fe166401caea3d70cf92033fdabc4c3e9615909f0b93a.
Nov 12 20:53:46.989680 containerd[1962]: time="2024-11-12T20:53:46.989448839Z" level=info msg="StartContainer for \"1422d6f2e297403c7e2fe166401caea3d70cf92033fdabc4c3e9615909f0b93a\" returns successfully"
Nov 12 20:53:47.474559 kubelet[3519]: I1112 20:53:47.474529 3519 scope.go:117] "RemoveContainer" containerID="d926d6fd52c632a2b60aad536c406b453de8d1e69ec414e0c377e9486dc0cf29"
Nov 12 20:53:47.483797 containerd[1962]: time="2024-11-12T20:53:47.483330172Z" level=info msg="CreateContainer within sandbox \"ce9f5ae3afe1b679653c0dabd517c7a21af73f7fee976d287eb9b90db0d3b149\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 12 20:53:47.508398 containerd[1962]: time="2024-11-12T20:53:47.508349610Z" level=info msg="CreateContainer within sandbox \"ce9f5ae3afe1b679653c0dabd517c7a21af73f7fee976d287eb9b90db0d3b149\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"44cbf1f97a3141dc996d303803bce5baafae7f2d0186f459cbcc65874282378d\""
Nov 12 20:53:47.508984 containerd[1962]: time="2024-11-12T20:53:47.508953829Z" level=info msg="StartContainer for \"44cbf1f97a3141dc996d303803bce5baafae7f2d0186f459cbcc65874282378d\""
Nov 12 20:53:47.542310 systemd[1]: Started cri-containerd-44cbf1f97a3141dc996d303803bce5baafae7f2d0186f459cbcc65874282378d.scope - libcontainer container 44cbf1f97a3141dc996d303803bce5baafae7f2d0186f459cbcc65874282378d.
Nov 12 20:53:47.602945 containerd[1962]: time="2024-11-12T20:53:47.602896613Z" level=info msg="StartContainer for \"44cbf1f97a3141dc996d303803bce5baafae7f2d0186f459cbcc65874282378d\" returns successfully"
Nov 12 20:53:51.532272 systemd[1]: cri-containerd-ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238.scope: Deactivated successfully.
Nov 12 20:53:51.535201 systemd[1]: cri-containerd-ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238.scope: Consumed 1.638s CPU time, 19.9M memory peak, 0B memory swap peak.
Nov 12 20:53:51.575196 containerd[1962]: time="2024-11-12T20:53:51.574832510Z" level=info msg="shim disconnected" id=ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238 namespace=k8s.io
Nov 12 20:53:51.575196 containerd[1962]: time="2024-11-12T20:53:51.574899008Z" level=warning msg="cleaning up after shim disconnected" id=ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238 namespace=k8s.io
Nov 12 20:53:51.575196 containerd[1962]: time="2024-11-12T20:53:51.574915223Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:53:51.581791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238-rootfs.mount: Deactivated successfully.
Nov 12 20:53:52.057899 kubelet[3519]: E1112 20:53:52.057828 3519 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 12 20:53:52.501172 kubelet[3519]: I1112 20:53:52.500929 3519 scope.go:117] "RemoveContainer" containerID="ab9df49569d84f487cedce8598ac29d2569287929035af0578bad525fa3be238"
Nov 12 20:53:52.504020 containerd[1962]: time="2024-11-12T20:53:52.503982821Z" level=info msg="CreateContainer within sandbox \"cbef58cad21faf62919af425aad6cbca74efbdb35f890928bed391cab98d4975\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 12 20:53:52.526412 containerd[1962]: time="2024-11-12T20:53:52.526365741Z" level=info msg="CreateContainer within sandbox \"cbef58cad21faf62919af425aad6cbca74efbdb35f890928bed391cab98d4975\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e0c2f1f5a9457bb85c9d29910d587d2bb82b9d8fd7f3e2ee0a2c07e10f6699b5\""
Nov 12 20:53:52.528822 containerd[1962]: time="2024-11-12T20:53:52.528778874Z" level=info msg="StartContainer for \"e0c2f1f5a9457bb85c9d29910d587d2bb82b9d8fd7f3e2ee0a2c07e10f6699b5\""
Nov 12 20:53:52.590432 systemd[1]: Started cri-containerd-e0c2f1f5a9457bb85c9d29910d587d2bb82b9d8fd7f3e2ee0a2c07e10f6699b5.scope - libcontainer container e0c2f1f5a9457bb85c9d29910d587d2bb82b9d8fd7f3e2ee0a2c07e10f6699b5.
Nov 12 20:53:52.663431 containerd[1962]: time="2024-11-12T20:53:52.663386139Z" level=info msg="StartContainer for \"e0c2f1f5a9457bb85c9d29910d587d2bb82b9d8fd7f3e2ee0a2c07e10f6699b5\" returns successfully"
Nov 12 20:53:55.517835 systemd[1]: run-containerd-runc-k8s.io-7bcc673cdc9070c10d2f2e4e6884ad29a11924f924e5fffb4ae43756c45d20fd-runc.fcP25H.mount: Deactivated successfully.
Nov 12 20:54:02.058791 kubelet[3519]: E1112 20:54:02.058666 3519 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"