Jul 2 00:15:32.026855 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:15:32.026898 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:15:32.026915 kernel: BIOS-provided physical RAM map:
Jul 2 00:15:32.026927 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:15:32.026937 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:15:32.026950 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:15:32.026967 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jul 2 00:15:32.027028 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jul 2 00:15:32.027041 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jul 2 00:15:32.027053 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:15:32.027068 kernel: NX (Execute Disable) protection: active
Jul 2 00:15:32.027079 kernel: APIC: Static calls initialized
Jul 2 00:15:32.027089 kernel: SMBIOS 2.7 present.
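An editor's aside, not part of the boot log: the two "usable" regions in the e820 map above can be summed directly to cross-check the memory totals the kernel reports later. The hex bounds below are copied from the map; end addresses in e820 output are inclusive, hence the +1 per region.

```shell
# Sum the two "usable" BIOS-e820 regions from the map above.
# End addresses are inclusive, so each region spans (end - start + 1) bytes.
usable=$(( (0x000000000009fbff - 0x0000000000000000 + 1) \
         + (0x000000007d9e9fff - 0x0000000000100000 + 1) ))
echo "$usable"   # 2107153408 bytes (~2010 MiB)
```

That figure lands within a few pages of the 2057760K the "Memory:" line reports further down, once the kernel's own low-memory reservations are accounted for.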
Jul 2 00:15:32.027101 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 2 00:15:32.027119 kernel: Hypervisor detected: KVM
Jul 2 00:15:32.027132 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:15:32.027145 kernel: kvm-clock: using sched offset of 6250238844 cycles
Jul 2 00:15:32.027161 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:15:32.027175 kernel: tsc: Detected 2499.996 MHz processor
Jul 2 00:15:32.027189 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:15:32.027253 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:15:32.027270 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jul 2 00:15:32.027282 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:15:32.027294 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:15:32.027306 kernel: Using GB pages for direct mapping
Jul 2 00:15:32.027367 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:15:32.027380 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jul 2 00:15:32.027392 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jul 2 00:15:32.027406 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 00:15:32.027420 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 2 00:15:32.027882 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jul 2 00:15:32.027900 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 2 00:15:32.027917 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 00:15:32.027929 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 2 00:15:32.027941 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 00:15:32.027955 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 2 00:15:32.027970 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 2 00:15:32.027985 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 2 00:15:32.028002 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jul 2 00:15:32.028015 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jul 2 00:15:32.028042 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jul 2 00:15:32.028056 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jul 2 00:15:32.028069 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jul 2 00:15:32.028084 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jul 2 00:15:32.028102 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jul 2 00:15:32.028115 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jul 2 00:15:32.028129 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jul 2 00:15:32.028145 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jul 2 00:15:32.028160 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 00:15:32.028175 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 00:15:32.028190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 2 00:15:32.028205 kernel: NUMA: Initialized distance table, cnt=1
Jul 2 00:15:32.028285 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jul 2 00:15:32.028307 kernel: Zone ranges:
Jul 2 00:15:32.028355 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:15:32.028370 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jul 2 00:15:32.028385 kernel: Normal empty
Jul 2 00:15:32.028401 kernel: Movable zone start for each node
Jul 2 00:15:32.028417 kernel: Early memory node ranges
Jul 2 00:15:32.028431 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:15:32.028445 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jul 2 00:15:32.028460 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jul 2 00:15:32.028480 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:15:32.028497 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:15:32.028513 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jul 2 00:15:32.028529 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 2 00:15:32.028546 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:15:32.028562 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 2 00:15:32.028576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:15:32.028590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:15:32.028603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:15:32.028617 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:15:32.028634 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:15:32.028648 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:15:32.028661 kernel: TSC deadline timer available
Jul 2 00:15:32.028676 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:15:32.028689 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:15:32.028704 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jul 2 00:15:32.028718 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:15:32.028733 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:15:32.028747 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:15:32.028764 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:15:32.028778 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:15:32.028792 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:15:32.028806 kernel: kvm-guest: PV spinlocks enabled
Jul 2 00:15:32.028820 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:15:32.028910 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:15:32.028928 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:15:32.028942 kernel: random: crng init done
Jul 2 00:15:32.028960 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:15:32.028974 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:15:32.028989 kernel: Fallback order for Node 0: 0
Jul 2 00:15:32.029003 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jul 2 00:15:32.029018 kernel: Policy zone: DMA32
Jul 2 00:15:32.029032 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:15:32.029046 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved)
Jul 2 00:15:32.029061 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:15:32.029078 kernel: Kernel/User page tables isolation: enabled
Jul 2 00:15:32.029093 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:15:32.029165 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:15:32.029184 kernel: Dynamic Preempt: voluntary
Jul 2 00:15:32.029200 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:15:32.029216 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:15:32.029231 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:15:32.029246 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:15:32.029261 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:15:32.029275 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:15:32.029294 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:15:32.029309 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:15:32.029344 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 00:15:32.029357 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:15:32.029370 kernel: Console: colour VGA+ 80x25
Jul 2 00:15:32.029383 kernel: printk: console [ttyS0] enabled
Jul 2 00:15:32.029396 kernel: ACPI: Core revision 20230628
Jul 2 00:15:32.029409 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 2 00:15:32.029423 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:15:32.029443 kernel: x2apic enabled
Jul 2 00:15:32.029459 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:15:32.029489 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 2 00:15:32.029506 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jul 2 00:15:32.029522 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 00:15:32.029537 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 00:15:32.029666 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:15:32.029683 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:15:32.029698 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:15:32.029721 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:15:32.029736 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 2 00:15:32.029750 kernel: RETBleed: Vulnerable
Jul 2 00:15:32.029769 kernel: Speculative Store Bypass: Vulnerable
Jul 2 00:15:32.029783 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:15:32.029798 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:15:32.029813 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 00:15:32.029828 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:15:32.029909 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:15:32.029931 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:15:32.029947 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 2 00:15:32.029963 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 2 00:15:32.029979 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 2 00:15:32.029995 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 2 00:15:32.030011 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 2 00:15:32.030028 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 2 00:15:32.030044 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:15:32.030060 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 2 00:15:32.030074 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 2 00:15:32.030088 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 2 00:15:32.030105 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 2 00:15:32.030119 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 2 00:15:32.030133 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 2 00:15:32.030149 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 2 00:15:32.030166 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:15:32.030182 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:15:32.030199 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:15:32.030215 kernel: SELinux: Initializing.
Jul 2 00:15:32.030231 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:15:32.030244 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:15:32.030258 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 2 00:15:32.030273 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:15:32.030294 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:15:32.030307 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:15:32.030338 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 2 00:15:32.030353 kernel: signal: max sigframe size: 3632
Jul 2 00:15:32.030367 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:15:32.030384 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:15:32.030400 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 00:15:32.030418 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:15:32.030436 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:15:32.030458 kernel: .... node #0, CPUs: #1
Jul 2 00:15:32.030477 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 2 00:15:32.030495 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 00:15:32.030511 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:15:32.030526 kernel: smpboot: Max logical packages: 1
Jul 2 00:15:32.030541 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jul 2 00:15:32.030557 kernel: devtmpfs: initialized
Jul 2 00:15:32.030574 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:15:32.030594 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:15:32.030611 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:15:32.030628 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:15:32.030645 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:15:32.030662 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:15:32.030679 kernel: audit: type=2000 audit(1719879330.839:1): state=initialized audit_enabled=0 res=1
Jul 2 00:15:32.030695 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:15:32.030713 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:15:32.030731 kernel: cpuidle: using governor menu
Jul 2 00:15:32.030752 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:15:32.030769 kernel: dca service started, version 1.12.1
Jul 2 00:15:32.030786 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:15:32.030804 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
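An editor's aside, not log output: the two BogoMIPS figures in this log are mutually consistent. Assuming CONFIG_HZ=1000 for this kernel (an assumption, not stated in the log), BogoMIPS = lpj * HZ / 500000, which reproduces both the per-CPU value from the calibration line and the 2-CPU total above.

```shell
# Reproduce the logged BogoMIPS values from lpj=2499996 (see the
# "Calibrating delay loop" line), assuming HZ=1000.
awk 'BEGIN {
  per_cpu = 2499996 * 1000 / 500000          # BogoMIPS per CPU
  printf "%.2f per CPU, %.2f total\n", per_cpu, 2 * per_cpu
}'
# prints: 4999.99 per CPU, 9999.98 total
```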
Jul 2 00:15:32.030822 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:15:32.030838 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:15:32.030855 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:15:32.030872 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:15:32.030889 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:15:32.030910 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:15:32.030927 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:15:32.030945 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:15:32.030961 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 2 00:15:32.030978 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:15:32.030996 kernel: ACPI: Interpreter enabled
Jul 2 00:15:32.031013 kernel: ACPI: PM: (supports S0 S5)
Jul 2 00:15:32.031030 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:15:32.031047 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:15:32.031068 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:15:32.031086 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jul 2 00:15:32.031103 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:15:32.031388 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:15:32.031538 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 2 00:15:32.031668 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 2 00:15:32.031687 kernel: acpiphp: Slot [3] registered
Jul 2 00:15:32.031708 kernel: acpiphp: Slot [4] registered
Jul 2 00:15:32.031723 kernel: acpiphp: Slot [5] registered
Jul 2 00:15:32.031739 kernel: acpiphp: Slot [6] registered
Jul 2 00:15:32.031754 kernel: acpiphp: Slot [7] registered
Jul 2 00:15:32.031769 kernel: acpiphp: Slot [8] registered
Jul 2 00:15:32.031785 kernel: acpiphp: Slot [9] registered
Jul 2 00:15:32.031800 kernel: acpiphp: Slot [10] registered
Jul 2 00:15:32.031816 kernel: acpiphp: Slot [11] registered
Jul 2 00:15:32.031831 kernel: acpiphp: Slot [12] registered
Jul 2 00:15:32.031847 kernel: acpiphp: Slot [13] registered
Jul 2 00:15:32.031865 kernel: acpiphp: Slot [14] registered
Jul 2 00:15:32.031880 kernel: acpiphp: Slot [15] registered
Jul 2 00:15:32.031896 kernel: acpiphp: Slot [16] registered
Jul 2 00:15:32.031911 kernel: acpiphp: Slot [17] registered
Jul 2 00:15:32.031927 kernel: acpiphp: Slot [18] registered
Jul 2 00:15:32.031942 kernel: acpiphp: Slot [19] registered
Jul 2 00:15:32.031958 kernel: acpiphp: Slot [20] registered
Jul 2 00:15:32.031972 kernel: acpiphp: Slot [21] registered
Jul 2 00:15:32.031987 kernel: acpiphp: Slot [22] registered
Jul 2 00:15:32.032005 kernel: acpiphp: Slot [23] registered
Jul 2 00:15:32.032019 kernel: acpiphp: Slot [24] registered
Jul 2 00:15:32.032045 kernel: acpiphp: Slot [25] registered
Jul 2 00:15:32.032060 kernel: acpiphp: Slot [26] registered
Jul 2 00:15:32.032075 kernel: acpiphp: Slot [27] registered
Jul 2 00:15:32.032091 kernel: acpiphp: Slot [28] registered
Jul 2 00:15:32.032106 kernel: acpiphp: Slot [29] registered
Jul 2 00:15:32.032122 kernel: acpiphp: Slot [30] registered
Jul 2 00:15:32.032138 kernel: acpiphp: Slot [31] registered
Jul 2 00:15:32.032154 kernel: PCI host bridge to bus 0000:00
Jul 2 00:15:32.032528 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:15:32.034269 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:15:32.034423 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:15:32.034551 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 00:15:32.034673 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:15:32.034825 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:15:32.034972 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:15:32.035124 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jul 2 00:15:32.035256 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 2 00:15:32.035404 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jul 2 00:15:32.035536 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 2 00:15:32.037581 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 2 00:15:32.038376 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 2 00:15:32.038679 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 2 00:15:32.038848 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 2 00:15:32.039107 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 2 00:15:32.039281 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jul 2 00:15:32.039444 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jul 2 00:15:32.039708 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 2 00:15:32.039850 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:15:32.039999 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 00:15:32.040167 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jul 2 00:15:32.040308 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 00:15:32.040535 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jul 2 00:15:32.040559 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:15:32.040576 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:15:32.040662 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:15:32.040686 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:15:32.040703 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:15:32.040720 kernel: iommu: Default domain type: Translated
Jul 2 00:15:32.040736 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:15:32.040753 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:15:32.040769 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:15:32.040786 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:15:32.040802 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jul 2 00:15:32.040954 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 2 00:15:32.041305 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 2 00:15:32.041820 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:15:32.041904 kernel: vgaarb: loaded
Jul 2 00:15:32.041924 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 2 00:15:32.041942 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 2 00:15:32.041959 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:15:32.041977 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:15:32.041994 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:15:32.042015 kernel: pnp: PnP ACPI init
Jul 2 00:15:32.042032 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 00:15:32.042049 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:15:32.042064 kernel: NET: Registered PF_INET protocol family
Jul 2 00:15:32.042081 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:15:32.042098 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 00:15:32.042114 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:15:32.042132 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:15:32.042149 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 00:15:32.042213 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 00:15:32.042232 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:15:32.042249 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:15:32.042265 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:15:32.042282 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:15:32.042557 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:15:32.042682 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:15:32.042802 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:15:32.042923 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 00:15:32.043061 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:15:32.043082 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:15:32.043099 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 00:15:32.043114 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 2 00:15:32.043132 kernel: clocksource: Switched to clocksource tsc
Jul 2 00:15:32.043147 kernel: Initialise system trusted keyrings
Jul 2 00:15:32.043163 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 00:15:32.043178 kernel: Key type asymmetric registered
Jul 2 00:15:32.043197 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:15:32.043212 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:15:32.043227 kernel: io scheduler mq-deadline registered
Jul 2 00:15:32.043242 kernel: io scheduler kyber registered
Jul 2 00:15:32.043257 kernel: io scheduler bfq registered
Jul 2 00:15:32.043274 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:15:32.043290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:15:32.043305 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:15:32.043353 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:15:32.043367 kernel: i8042: Warning: Keylock active
Jul 2 00:15:32.043380 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:15:32.043532 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:15:32.043658 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 2 00:15:32.043781 kernel: rtc_cmos 00:00: registered as rtc0
Jul 2 00:15:32.043906 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T00:15:31 UTC (1719879331)
Jul 2 00:15:32.044053 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 2 00:15:32.043929 kernel: intel_pstate: CPU model not supported
Jul 2 00:15:32.043944 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:15:32.043958 kernel: Segment Routing with IPv6
Jul 2 00:15:32.043972 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:15:32.043988 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:15:32.044003 kernel: Key type dns_resolver registered
Jul 2 00:15:32.044017 kernel: IPI shorthand broadcast: enabled
Jul 2 00:15:32.044053 kernel: sched_clock: Marking stable (619002417, 297373095)->(1003442819, -87067307)
Jul 2 00:15:32.044068 kernel: registered taskstats version 1
Jul 2 00:15:32.044082 kernel: Loading compiled-in X.509 certificates
Jul 2 00:15:32.044100 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:15:32.044114 kernel: Key type .fscrypt registered
Jul 2 00:15:32.044129 kernel: Key type fscrypt-provisioning registered
Jul 2 00:15:32.044143 kernel: ima: No TPM chip found, activating TPM-bypass!
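An editor's aside, not log output: the rtc_cmos entry above pairs a human-readable timestamp with its Unix epoch, and the two can be cross-checked with GNU date (the `-d @EPOCH` form is a GNU extension).

```shell
# Cross-check the rtc_cmos line: epoch 1719879331 should decode to
# the wall-clock time the kernel logged (requires GNU date).
date -u -d @1719879331 +%Y-%m-%dT%H:%M:%S   # prints 2024-07-02T00:15:31
```

The decoded time is one second before the journal timestamps on the surrounding entries, which is consistent with the log's sub-second drift during early boot.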
Jul 2 00:15:32.044158 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:15:32.044173 kernel: ima: No architecture policies found
Jul 2 00:15:32.044187 kernel: clk: Disabling unused clocks
Jul 2 00:15:32.044201 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:15:32.044219 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:15:32.044233 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:15:32.044247 kernel: Run /init as init process
Jul 2 00:15:32.044261 kernel: with arguments:
Jul 2 00:15:32.044276 kernel: /init
Jul 2 00:15:32.044290 kernel: with environment:
Jul 2 00:15:32.044304 kernel: HOME=/
Jul 2 00:15:32.044319 kernel: TERM=linux
Jul 2 00:15:32.044353 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:15:32.044372 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:15:32.044390 systemd[1]: Detected virtualization amazon.
Jul 2 00:15:32.044425 systemd[1]: Detected architecture x86-64.
Jul 2 00:15:32.044440 systemd[1]: Running in initrd.
Jul 2 00:15:32.045374 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:15:32.045397 systemd[1]: Hostname set to .
Jul 2 00:15:32.045414 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:15:32.045431 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:15:32.045447 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:15:32.045464 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:15:32.045483 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:15:32.045501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:15:32.045518 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:15:32.045538 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:15:32.045558 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:15:32.045575 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:15:32.045653 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:15:32.045670 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:15:32.045688 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:15:32.045705 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:15:32.045780 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:15:32.045799 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:15:32.045957 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:15:32.045979 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:15:32.045996 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:15:32.046241 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:15:32.046263 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:15:32.046283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:15:32.046349 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:15:32.046374 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:15:32.046431 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:15:32.046450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:15:32.046503 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:15:32.046524 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:15:32.046544 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:15:32.046598 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:15:32.046623 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:15:32.046676 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:15:32.046918 systemd-journald[178]: Collecting audit messages is disabled.
Jul 2 00:15:32.046969 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:15:32.046989 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:15:32.047007 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:15:32.047028 systemd-journald[178]: Journal started
Jul 2 00:15:32.047069 systemd-journald[178]: Runtime Journal (/run/log/journal/ec225c56818c61d9ae95ca0a7fd5a103) is 4.8M, max 38.6M, 33.7M free.
Jul 2 00:15:32.049788 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:15:32.057653 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:15:32.074036 systemd-modules-load[179]: Inserted module 'overlay'
Jul 2 00:15:32.222733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:15:32.222772 kernel: Bridge firewalling registered
Jul 2 00:15:32.077684 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:15:32.127928 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jul 2 00:15:32.220758 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:15:32.241957 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:15:32.247103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:15:32.260641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:15:32.266508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:15:32.275611 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:15:32.282476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:15:32.296469 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:15:32.301529 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:15:32.319410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:15:32.323512 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:15:32.333782 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:15:32.369872 dracut-cmdline[216]: dracut-dracut-053
Jul 2 00:15:32.374221 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:15:32.385155 systemd-resolved[209]: Positive Trust Anchors:
Jul 2 00:15:32.385179 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:15:32.385231 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:15:32.417467 systemd-resolved[209]: Defaulting to hostname 'linux'.
Jul 2 00:15:32.427666 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:15:32.433607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:15:32.512360 kernel: SCSI subsystem initialized
Jul 2 00:15:32.529364 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:15:32.546364 kernel: iscsi: registered transport (tcp)
Jul 2 00:15:32.579434 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:15:32.579519 kernel: QLogic iSCSI HBA Driver
Jul 2 00:15:32.626104 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:15:32.632696 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:15:32.697409 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:15:32.697495 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:15:32.698888 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:15:32.751112 kernel: raid6: avx512x4 gen() 10702 MB/s
Jul 2 00:15:32.771381 kernel: raid6: avx512x2 gen() 6000 MB/s
Jul 2 00:15:32.789407 kernel: raid6: avx512x1 gen() 6344 MB/s
Jul 2 00:15:32.806449 kernel: raid6: avx2x4 gen() 13350 MB/s
Jul 2 00:15:32.823381 kernel: raid6: avx2x2 gen() 15968 MB/s
Jul 2 00:15:32.840386 kernel: raid6: avx2x1 gen() 12807 MB/s
Jul 2 00:15:32.840469 kernel: raid6: using algorithm avx2x2 gen() 15968 MB/s
Jul 2 00:15:32.858476 kernel: raid6: .... xor() 8749 MB/s, rmw enabled
Jul 2 00:15:32.858565 kernel: raid6: using avx512x2 recovery algorithm
Jul 2 00:15:32.915361 kernel: xor: automatically using best checksumming function avx
Jul 2 00:15:33.242424 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:15:33.274966 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:15:33.287553 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:15:33.325775 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jul 2 00:15:33.343732 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:15:33.380664 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:15:33.403311 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Jul 2 00:15:33.457212 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:15:33.463630 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:15:33.538813 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:15:33.554713 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:15:33.587520 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:15:33.591905 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:15:33.596236 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:15:33.598064 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:15:33.614564 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:15:33.649611 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:15:33.702354 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:15:33.718624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:15:33.720456 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:15:33.724959 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:15:33.729019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:15:33.738423 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 2 00:15:33.764404 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 2 00:15:33.764755 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:15:33.764786 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:15:33.764805 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 2 00:15:33.764989 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:5d:2e:fd:a5:21
Jul 2 00:15:33.729257 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:15:33.733789 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:15:33.753861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:15:33.782059 (udev-worker)[448]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:15:33.806321 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 2 00:15:33.806691 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 00:15:33.820459 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 2 00:15:33.826454 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:15:33.826518 kernel: GPT:9289727 != 16777215
Jul 2 00:15:33.826536 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:15:33.826555 kernel: GPT:9289727 != 16777215
Jul 2 00:15:33.826571 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:15:33.826588 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:15:33.962353 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (454)
Jul 2 00:15:33.987519 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (443)
Jul 2 00:15:34.017265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:15:34.027609 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:15:34.082514 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:15:34.132770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 2 00:15:34.153004 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 2 00:15:34.211706 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 00:15:34.224093 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 2 00:15:34.227856 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 2 00:15:34.236939 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:15:34.266034 disk-uuid[627]: Primary Header is updated.
Jul 2 00:15:34.266034 disk-uuid[627]: Secondary Entries is updated.
Jul 2 00:15:34.266034 disk-uuid[627]: Secondary Header is updated.
Jul 2 00:15:34.285053 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:15:34.294362 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:15:34.302369 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:15:35.301410 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:15:35.301505 disk-uuid[628]: The operation has completed successfully.
Jul 2 00:15:35.530445 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:15:35.530687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:15:35.575557 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:15:35.595015 sh[969]: Success
Jul 2 00:15:35.653578 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 00:15:35.776233 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:15:35.788459 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:15:35.791933 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:15:35.851677 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:15:35.851752 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:15:35.851782 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:15:35.852848 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:15:35.854375 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:15:35.971361 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 2 00:15:36.003184 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:15:36.004059 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:15:36.011526 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:15:36.018559 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:15:36.034367 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:15:36.034439 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:15:36.034458 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:15:36.042370 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:15:36.061994 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:15:36.061262 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:15:36.082964 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:15:36.093686 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:15:36.185186 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:15:36.205567 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:15:36.245044 systemd-networkd[1161]: lo: Link UP
Jul 2 00:15:36.246107 systemd-networkd[1161]: lo: Gained carrier
Jul 2 00:15:36.250291 systemd-networkd[1161]: Enumeration completed
Jul 2 00:15:36.250790 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:15:36.250795 systemd-networkd[1161]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:15:36.252151 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:15:36.258638 systemd[1]: Reached target network.target - Network.
Jul 2 00:15:36.268038 systemd-networkd[1161]: eth0: Link UP
Jul 2 00:15:36.268050 systemd-networkd[1161]: eth0: Gained carrier
Jul 2 00:15:36.268123 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:15:36.286435 systemd-networkd[1161]: eth0: DHCPv4 address 172.31.23.160/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:15:36.585211 ignition[1086]: Ignition 2.18.0
Jul 2 00:15:36.585227 ignition[1086]: Stage: fetch-offline
Jul 2 00:15:36.587758 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:15:36.585574 ignition[1086]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:15:36.585589 ignition[1086]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:15:36.586263 ignition[1086]: Ignition finished successfully
Jul 2 00:15:36.598663 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:15:36.623789 ignition[1171]: Ignition 2.18.0
Jul 2 00:15:36.623804 ignition[1171]: Stage: fetch
Jul 2 00:15:36.624292 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:15:36.624305 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:15:36.624482 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:15:36.643971 ignition[1171]: PUT result: OK
Jul 2 00:15:36.647760 ignition[1171]: parsed url from cmdline: ""
Jul 2 00:15:36.647886 ignition[1171]: no config URL provided
Jul 2 00:15:36.647895 ignition[1171]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:15:36.647908 ignition[1171]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:15:36.648017 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:15:36.657204 ignition[1171]: PUT result: OK
Jul 2 00:15:36.657344 ignition[1171]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 2 00:15:36.661530 ignition[1171]: GET result: OK
Jul 2 00:15:36.661656 ignition[1171]: parsing config with SHA512: 55ad369e4742a238309c862e24cdaede7074dbdb380515f27b5d1c2b9beed30edae27cd7a1c455927c9313910a432e207ca888ddb6902e8abd772566bde8376f
Jul 2 00:15:36.670215 unknown[1171]: fetched base config from "system"
Jul 2 00:15:36.670708 unknown[1171]: fetched base config from "system"
Jul 2 00:15:36.672008 ignition[1171]: fetch: fetch complete
Jul 2 00:15:36.670724 unknown[1171]: fetched user config from "aws"
Jul 2 00:15:36.672017 ignition[1171]: fetch: fetch passed
Jul 2 00:15:36.674528 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:15:36.672088 ignition[1171]: Ignition finished successfully
Jul 2 00:15:36.687789 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:15:36.710107 ignition[1178]: Ignition 2.18.0
Jul 2 00:15:36.710124 ignition[1178]: Stage: kargs
Jul 2 00:15:36.710791 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:15:36.710804 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:15:36.710910 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:15:36.715118 ignition[1178]: PUT result: OK
Jul 2 00:15:36.721060 ignition[1178]: kargs: kargs passed
Jul 2 00:15:36.721141 ignition[1178]: Ignition finished successfully
Jul 2 00:15:36.723223 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:15:36.730762 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:15:36.750953 ignition[1185]: Ignition 2.18.0
Jul 2 00:15:36.750968 ignition[1185]: Stage: disks
Jul 2 00:15:36.752339 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:15:36.752390 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:15:36.752573 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:15:36.757036 ignition[1185]: PUT result: OK
Jul 2 00:15:36.761462 ignition[1185]: disks: disks passed
Jul 2 00:15:36.761636 ignition[1185]: Ignition finished successfully
Jul 2 00:15:36.765585 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:15:36.766892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:15:36.770829 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:15:36.773575 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:15:36.776506 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:15:36.779154 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:15:36.785826 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:15:36.846872 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:15:36.853371 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:15:36.862909 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:15:37.046732 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:15:37.047352 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:15:37.050069 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:15:37.068676 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:15:37.109621 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:15:37.111725 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:15:37.111802 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:15:37.111841 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:15:37.129587 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:15:37.145231 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213)
Jul 2 00:15:37.150038 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:15:37.150248 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:15:37.150289 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:15:37.151722 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:15:37.161822 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:15:37.166094 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:15:37.621433 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:15:37.661793 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:15:37.687539 systemd-networkd[1161]: eth0: Gained IPv6LL
Jul 2 00:15:37.706189 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:15:37.732990 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:15:38.085894 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:15:38.092472 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:15:38.097559 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:15:38.108601 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:15:38.110895 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:15:38.150661 ignition[1325]: INFO : Ignition 2.18.0
Jul 2 00:15:38.150661 ignition[1325]: INFO : Stage: mount
Jul 2 00:15:38.153689 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:15:38.153689 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:15:38.153689 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:15:38.153613 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:15:38.161566 ignition[1325]: INFO : PUT result: OK
Jul 2 00:15:38.167062 ignition[1325]: INFO : mount: mount passed
Jul 2 00:15:38.168500 ignition[1325]: INFO : Ignition finished successfully
Jul 2 00:15:38.169841 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:15:38.175527 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:15:38.197578 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:15:38.214248 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1339)
Jul 2 00:15:38.214314 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:15:38.214349 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:15:38.215988 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:15:38.220349 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:15:38.222077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:15:38.252888 ignition[1356]: INFO : Ignition 2.18.0
Jul 2 00:15:38.252888 ignition[1356]: INFO : Stage: files
Jul 2 00:15:38.255113 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:15:38.255113 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:15:38.255113 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:15:38.261009 ignition[1356]: INFO : PUT result: OK
Jul 2 00:15:38.265068 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:15:38.279047 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:15:38.279047 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:15:38.329517 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:15:38.331439 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:15:38.333818 unknown[1356]: wrote ssh authorized keys file for user: core
Jul 2 00:15:38.336807 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:15:38.340373 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:15:38.343307 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:15:38.420857 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:15:38.548023 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:15:38.548023 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:15:38.553033 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:15:38.553033 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:15:38.557510 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:15:38.557510 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:15:38.564671 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:15:38.564671 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:15:38.564671 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:15:38.564671 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:15:38.564671 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:15:38.564671 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:15:38.564671 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:15:38.564671 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:15:38.564671 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 00:15:39.025865 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 00:15:39.348680 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:15:39.348680 ignition[1356]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 00:15:39.354570 ignition[1356]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:15:39.357237 ignition[1356]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:15:39.357237 ignition[1356]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 00:15:39.357237 ignition[1356]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:15:39.357237 ignition[1356]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:15:39.357237 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:15:39.357237 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:15:39.357237 ignition[1356]: INFO : files: files passed
Jul 2 00:15:39.357237 ignition[1356]: INFO : Ignition finished successfully
Jul 2 00:15:39.369542 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:15:39.384622 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:15:39.398738 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:15:39.414752 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:15:39.414889 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:15:39.462118 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:15:39.462118 initrd-setup-root-after-ignition[1385]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:15:39.469098 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:15:39.472207 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:15:39.476912 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:15:39.484526 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:15:39.551585 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:15:39.551772 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:15:39.560304 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:15:39.564341 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:15:39.567453 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:15:39.573994 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:15:39.628234 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:15:39.641569 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:15:39.667911 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:15:39.668245 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:15:39.671858 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:15:39.672929 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:15:39.673291 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:15:39.676346 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:15:39.677997 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:15:39.680847 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:15:39.687548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:15:39.696015 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:15:39.701386 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:15:39.711240 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:15:39.711498 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:15:39.711658 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:15:39.711848 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:15:39.712264 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:15:39.712473 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:15:39.714099 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:15:39.714709 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:15:39.715410 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:15:39.721646 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:15:39.724907 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:15:39.727449 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 00:15:39.732949 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:15:39.733100 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:15:39.739522 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:15:39.740160 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:15:39.748114 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:15:39.761655 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:15:39.763342 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:15:39.763869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:15:39.766362 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:15:39.766473 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:15:39.771248 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:15:39.771356 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 2 00:15:39.791873 ignition[1409]: INFO : Ignition 2.18.0 Jul 2 00:15:39.791873 ignition[1409]: INFO : Stage: umount Jul 2 00:15:39.794123 ignition[1409]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:15:39.794123 ignition[1409]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:15:39.794123 ignition[1409]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:15:39.799942 ignition[1409]: INFO : PUT result: OK Jul 2 00:15:39.799292 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:15:39.802520 ignition[1409]: INFO : umount: umount passed Jul 2 00:15:39.803462 ignition[1409]: INFO : Ignition finished successfully Jul 2 00:15:39.807001 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:15:39.807129 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 00:15:39.810042 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:15:39.810142 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 00:15:39.823801 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:15:39.823909 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 00:15:39.827930 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 00:15:39.830218 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 00:15:39.830553 systemd[1]: Stopped target network.target - Network. Jul 2 00:15:39.844434 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:15:39.844540 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:15:39.850415 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:15:39.857203 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:15:39.861410 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 2 00:15:39.861556 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:15:39.882522 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:15:39.884114 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:15:39.884179 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:15:39.896049 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:15:39.896126 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:15:39.907153 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:15:39.907256 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:15:39.919602 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 00:15:39.919679 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 00:15:39.927900 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:15:39.937663 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:15:39.946476 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:15:39.946651 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:15:39.948392 systemd-networkd[1161]: eth0: DHCPv6 lease lost Jul 2 00:15:39.956097 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:15:39.956249 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:15:39.959572 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:15:39.959631 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:15:39.962207 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:15:39.962283 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:15:39.984627 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:15:39.994287 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 2 00:15:39.996679 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:15:40.006620 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:15:40.023482 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:15:40.023913 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:15:40.033048 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:15:40.033272 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:15:40.051284 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:15:40.051416 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:15:40.053714 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:15:40.055314 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:15:40.057169 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:15:40.057242 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:15:40.057902 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:15:40.057955 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:15:40.058349 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:15:40.058396 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:15:40.066669 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:15:40.072966 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:15:40.073057 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:15:40.078289 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:15:40.080140 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 2 00:15:40.084966 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:15:40.088935 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:15:40.112637 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 00:15:40.112773 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:15:40.117466 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:15:40.117581 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:15:40.120771 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 00:15:40.120832 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:15:40.124575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:15:40.124658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:15:40.128157 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:15:40.128297 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:15:40.131049 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:15:40.131161 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:15:40.135080 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:15:40.151112 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:15:40.161204 systemd[1]: Switching root. Jul 2 00:15:40.215376 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Jul 2 00:15:40.215460 systemd-journald[178]: Journal stopped Jul 2 00:15:43.654453 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:15:43.654549 kernel: SELinux: policy capability open_perms=1 Jul 2 00:15:43.654571 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:15:43.654597 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:15:43.654616 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:15:43.654641 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:15:43.654857 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:15:43.654889 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:15:43.654909 kernel: audit: type=1403 audit(1719879341.641:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:15:43.654929 systemd[1]: Successfully loaded SELinux policy in 73.451ms. Jul 2 00:15:43.654961 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.519ms. Jul 2 00:15:43.654982 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:15:43.655000 systemd[1]: Detected virtualization amazon. Jul 2 00:15:43.655023 systemd[1]: Detected architecture x86-64. Jul 2 00:15:43.655046 systemd[1]: Detected first boot. Jul 2 00:15:43.655064 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:15:43.655083 zram_generator::config[1452]: No configuration found. Jul 2 00:15:43.655104 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:15:43.655123 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:15:43.655142 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jul 2 00:15:43.655162 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 00:15:43.655184 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 00:15:43.655209 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 00:15:43.655231 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 00:15:43.655253 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 00:15:43.655274 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 00:15:43.655295 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 00:15:43.655317 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 00:15:43.655363 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 00:15:43.655384 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:15:43.655404 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:15:43.655615 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 00:15:43.655641 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 00:15:43.655773 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 00:15:43.655800 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:15:43.655821 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 00:15:43.655841 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:15:43.655862 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jul 2 00:15:43.655934 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 00:15:43.656082 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 00:15:43.656113 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 00:15:43.656134 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:15:43.656155 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:15:43.656177 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:15:43.656198 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:15:43.656221 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 00:15:43.656242 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 00:15:43.656709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:15:43.656750 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:15:43.656773 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:15:43.656796 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 00:15:43.656818 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 00:15:43.658180 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 00:15:43.658229 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 00:15:43.658253 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:15:43.658275 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 00:15:43.658302 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 00:15:43.658361 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 2 00:15:43.658384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:15:43.658405 systemd[1]: Reached target machines.target - Containers. Jul 2 00:15:43.658425 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 00:15:43.658444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:15:43.658466 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:15:43.658487 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 00:15:43.658509 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:15:43.658534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:15:43.658555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:15:43.658576 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 00:15:43.658597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:15:43.658619 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:15:43.658640 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:15:43.658660 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 00:15:43.658681 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:15:43.658705 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:15:43.658726 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:15:43.658747 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 2 00:15:43.658768 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 00:15:43.658788 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 00:15:43.658809 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:15:43.658830 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:15:43.658851 systemd[1]: Stopped verity-setup.service. Jul 2 00:15:43.658871 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:15:43.658895 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 00:15:43.658913 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 00:15:43.658932 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 00:15:43.658951 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 00:15:43.658972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 00:15:43.658990 kernel: loop: module loaded Jul 2 00:15:43.659015 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 00:15:43.659035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:15:43.659055 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:15:43.659077 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 00:15:43.659103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:15:43.659123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:15:43.659144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:15:43.659164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 2 00:15:43.659189 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:15:43.659252 systemd-journald[1523]: Collecting audit messages is disabled. Jul 2 00:15:43.659311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:15:43.663433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:15:43.663476 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 00:15:43.663504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:15:43.663526 systemd-journald[1523]: Journal started Jul 2 00:15:43.663567 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec225c56818c61d9ae95ca0a7fd5a103) is 4.8M, max 38.6M, 33.7M free. Jul 2 00:15:43.050975 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:15:43.102221 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 00:15:43.102641 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:15:43.677422 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:15:43.704388 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:15:43.711768 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:15:43.711854 kernel: fuse: init (API version 7.39) Jul 2 00:15:43.716094 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 00:15:43.719582 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 00:15:43.721950 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 00:15:43.740392 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:15:43.740647 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jul 2 00:15:43.759495 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 00:15:43.779439 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 00:15:43.781453 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:15:43.781517 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:15:43.785256 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 00:15:43.804547 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 00:15:43.833167 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 00:15:43.835487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:15:43.895176 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 00:15:43.897184 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Jul 2 00:15:43.897206 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Jul 2 00:15:43.936477 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 00:15:43.938837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:15:43.950678 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 00:15:43.987147 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 00:15:43.998979 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:15:44.001419 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jul 2 00:15:44.013735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:15:44.016160 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:15:44.027887 kernel: ACPI: bus type drm_connector registered Jul 2 00:15:44.045319 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:15:44.048976 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:15:44.056071 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 00:15:44.078035 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 00:15:44.080674 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 00:15:44.101844 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 00:15:44.107581 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 00:15:44.141788 kernel: loop0: detected capacity change from 0 to 139904 Jul 2 00:15:44.123035 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 00:15:44.146356 kernel: block loop0: the capability attribute has been deprecated. Jul 2 00:15:44.152981 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 00:15:44.158500 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec225c56818c61d9ae95ca0a7fd5a103 is 36.768ms for 972 entries. Jul 2 00:15:44.158500 systemd-journald[1523]: System Journal (/var/log/journal/ec225c56818c61d9ae95ca0a7fd5a103) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:15:44.209838 systemd-journald[1523]: Received client request to flush runtime journal. Jul 2 00:15:44.160541 udevadm[1588]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jul 2 00:15:44.211885 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 00:15:44.246503 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 00:15:44.263529 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:15:44.293967 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:15:44.296313 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 00:15:44.329691 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:15:44.335940 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jul 2 00:15:44.336413 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jul 2 00:15:44.366536 kernel: loop1: detected capacity change from 0 to 80568 Jul 2 00:15:44.364346 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:15:44.533420 kernel: loop2: detected capacity change from 0 to 210664 Jul 2 00:15:44.599373 kernel: loop3: detected capacity change from 0 to 60984 Jul 2 00:15:44.726364 kernel: loop4: detected capacity change from 0 to 139904 Jul 2 00:15:44.783441 kernel: loop5: detected capacity change from 0 to 80568 Jul 2 00:15:44.803356 kernel: loop6: detected capacity change from 0 to 210664 Jul 2 00:15:44.845550 kernel: loop7: detected capacity change from 0 to 60984 Jul 2 00:15:44.873742 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 00:15:44.874485 (sd-merge)[1609]: Merged extensions into '/usr'. Jul 2 00:15:44.897030 systemd[1]: Reloading requested from client PID 1574 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 00:15:44.897465 systemd[1]: Reloading... Jul 2 00:15:45.051377 zram_generator::config[1630]: No configuration found. 
Jul 2 00:15:45.364168 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:15:45.465275 systemd[1]: Reloading finished in 566 ms. Jul 2 00:15:45.538848 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 00:15:45.560148 systemd[1]: Starting ensure-sysext.service... Jul 2 00:15:45.566801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:15:45.612913 systemd[1]: Reloading requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... Jul 2 00:15:45.612940 systemd[1]: Reloading... Jul 2 00:15:45.661048 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:15:45.662765 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:15:45.667298 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:15:45.667883 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jul 2 00:15:45.667988 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jul 2 00:15:45.676200 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:15:45.676220 systemd-tmpfiles[1682]: Skipping /boot Jul 2 00:15:45.702887 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:15:45.702909 systemd-tmpfiles[1682]: Skipping /boot Jul 2 00:15:45.733353 zram_generator::config[1705]: No configuration found. Jul 2 00:15:45.976258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 00:15:45.986061 ldconfig[1568]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:15:46.056754 systemd[1]: Reloading finished in 443 ms. Jul 2 00:15:46.077736 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 00:15:46.081738 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 00:15:46.091457 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:15:46.122010 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:15:46.128674 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:15:46.147591 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:15:46.160217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:15:46.173081 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:15:46.192698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 00:15:46.212732 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:15:46.213025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:15:46.231774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:15:46.244878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:15:46.256591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:15:46.261245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 2 00:15:46.261764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:15:46.266690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:15:46.267024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:15:46.286881 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 00:15:46.292850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:15:46.293138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:15:46.306866 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:15:46.308314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:15:46.308620 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:15:46.320796 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:15:46.321405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:15:46.336687 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:15:46.338810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:15:46.339463 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:15:46.341022 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 00:15:46.349445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:15:46.350234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:15:46.352936 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:15:46.353128 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:15:46.363139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:15:46.363499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:15:46.378052 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:15:46.378452 systemd-udevd[1768]: Using default interface naming scheme 'v255'.
Jul 2 00:15:46.379116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:15:46.381874 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:15:46.397140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:15:46.397552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:15:46.406519 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:15:46.435557 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:15:46.448802 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:15:46.465078 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:15:46.468438 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:15:46.472715 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:15:46.481850 augenrules[1799]: No rules
Jul 2 00:15:46.485954 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:15:46.534779 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:15:46.557702 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:15:46.569582 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:15:46.708870 systemd-resolved[1765]: Positive Trust Anchors:
Jul 2 00:15:46.709334 systemd-resolved[1765]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:15:46.709512 systemd-resolved[1765]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:15:46.724360 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1823)
Jul 2 00:15:46.734035 systemd-resolved[1765]: Defaulting to hostname 'linux'.
Jul 2 00:15:46.744817 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:15:46.746563 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:15:46.748299 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:15:46.778106 systemd-networkd[1810]: lo: Link UP
Jul 2 00:15:46.778586 systemd-networkd[1810]: lo: Gained carrier
Jul 2 00:15:46.780588 systemd-networkd[1810]: Enumeration completed
Jul 2 00:15:46.781503 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:15:46.783572 systemd[1]: Reached target network.target - Network.
Jul 2 00:15:46.793415 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:15:46.825224 (udev-worker)[1822]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:15:46.851659 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jul 2 00:15:46.871762 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jul 2 00:15:46.888118 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 2 00:15:46.887548 systemd-networkd[1810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:15:46.887553 systemd-networkd[1810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:15:46.893239 systemd-networkd[1810]: eth0: Link UP
Jul 2 00:15:46.894918 systemd-networkd[1810]: eth0: Gained carrier
Jul 2 00:15:46.894956 systemd-networkd[1810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:15:46.897556 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:15:46.897631 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jul 2 00:15:46.906266 kernel: ACPI: button: Sleep Button [SLPF]
Jul 2 00:15:46.904591 systemd-networkd[1810]: eth0: DHCPv4 address 172.31.23.160/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:15:46.997383 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:15:47.007623 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1818)
Jul 2 00:15:47.007517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:15:47.212118 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 00:15:47.396218 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:15:47.414684 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:15:47.421048 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:15:47.428174 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:15:47.471949 lvm[1926]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:15:47.492540 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:15:47.530216 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:15:47.534120 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:15:47.536198 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:15:47.539072 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:15:47.540512 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:15:47.544797 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:15:47.547553 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:15:47.549761 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:15:47.555046 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:15:47.555088 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:15:47.557275 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:15:47.560212 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:15:47.565772 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:15:47.574043 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:15:47.576811 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:15:47.583920 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:15:47.586667 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:15:47.589215 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:15:47.594220 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:15:47.594257 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:15:47.601931 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:15:47.610895 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 00:15:47.622585 lvm[1934]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:15:47.623894 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:15:47.634552 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:15:47.645660 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:15:47.648280 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:15:47.658597 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:15:47.662863 systemd[1]: Started ntpd.service - Network Time Service.
Jul 2 00:15:47.676515 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:15:47.680743 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 2 00:15:47.692369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:15:47.698939 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:15:47.723573 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:15:47.725573 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:15:47.727502 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:15:47.747145 jq[1938]: false
Jul 2 00:15:47.759694 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:15:47.789491 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:15:47.789933 jq[1949]: true
Jul 2 00:15:47.799856 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:15:47.809608 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:15:47.812658 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:15:47.851513 jq[1955]: true
Jul 2 00:15:47.856152 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:15:47.922918 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:15:47.925266 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:15:47.952449 update_engine[1948]: I0702 00:15:47.952139 1948 main.cc:92] Flatcar Update Engine starting
Jul 2 00:15:47.964853 dbus-daemon[1937]: [system] SELinux support is enabled
Jul 2 00:15:47.965098 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:15:47.971203 extend-filesystems[1939]: Found loop4
Jul 2 00:15:47.971203 extend-filesystems[1939]: Found loop5
Jul 2 00:15:47.971203 extend-filesystems[1939]: Found loop6
Jul 2 00:15:47.971203 extend-filesystems[1939]: Found loop7
Jul 2 00:15:47.971203 extend-filesystems[1939]: Found nvme0n1
Jul 2 00:15:47.971203 extend-filesystems[1939]: Found nvme0n1p1
Jul 2 00:15:47.971203 extend-filesystems[1939]: Found nvme0n1p2
Jul 2 00:15:47.971203 extend-filesystems[1939]: Found nvme0n1p3
Jul 2 00:15:47.971203 extend-filesystems[1939]: Found usr
Jul 2 00:15:47.989685 extend-filesystems[1939]: Found nvme0n1p4
Jul 2 00:15:47.989685 extend-filesystems[1939]: Found nvme0n1p6
Jul 2 00:15:47.989685 extend-filesystems[1939]: Found nvme0n1p7
Jul 2 00:15:47.989685 extend-filesystems[1939]: Found nvme0n1p9
Jul 2 00:15:47.989685 extend-filesystems[1939]: Checking size of /dev/nvme0n1p9
Jul 2 00:15:48.010017 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:15:48.010095 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:15:48.012025 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:15:48.015592 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting
Jul 2 00:15:48.015592 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 2 00:15:48.015592 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: ----------------------------------------------------
Jul 2 00:15:48.015592 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: ntp-4 is maintained by Network Time Foundation,
Jul 2 00:15:48.015592 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 2 00:15:48.015592 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: corporation. Support and training for ntp-4 are
Jul 2 00:15:48.015592 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: available at https://www.nwtime.org/support
Jul 2 00:15:48.015592 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: ----------------------------------------------------
Jul 2 00:15:48.014213 ntpd[1941]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting
Jul 2 00:15:48.012056 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:15:48.014242 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 2 00:15:48.014253 ntpd[1941]: ----------------------------------------------------
Jul 2 00:15:48.014263 ntpd[1941]: ntp-4 is maintained by Network Time Foundation,
Jul 2 00:15:48.042581 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: proto: precision = 0.106 usec (-23)
Jul 2 00:15:48.042581 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: basedate set to 2024-06-19
Jul 2 00:15:48.042581 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: gps base set to 2024-06-23 (week 2320)
Jul 2 00:15:48.014273 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 2 00:15:48.014283 ntpd[1941]: corporation. Support and training for ntp-4 are
Jul 2 00:15:48.014294 ntpd[1941]: available at https://www.nwtime.org/support
Jul 2 00:15:48.014304 ntpd[1941]: ----------------------------------------------------
Jul 2 00:15:48.033907 ntpd[1941]: proto: precision = 0.106 usec (-23)
Jul 2 00:15:48.040107 ntpd[1941]: basedate set to 2024-06-19
Jul 2 00:15:48.040133 ntpd[1941]: gps base set to 2024-06-23 (week 2320)
Jul 2 00:15:48.053897 (ntainerd)[1977]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:15:48.077201 dbus-daemon[1937]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1810 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 2 00:15:48.080676 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123
Jul 2 00:15:48.080676 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 2 00:15:48.080676 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123
Jul 2 00:15:48.080676 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: Listen normally on 3 eth0 172.31.23.160:123
Jul 2 00:15:48.080676 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: Listen normally on 4 lo [::1]:123
Jul 2 00:15:48.080676 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: bind(21) AF_INET6 fe80::45d:2eff:fefd:a521%2#123 flags 0x11 failed: Cannot assign requested address
Jul 2 00:15:48.080676 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: unable to create socket on eth0 (5) for fe80::45d:2eff:fefd:a521%2#123
Jul 2 00:15:48.080676 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: failed to init interface for address fe80::45d:2eff:fefd:a521%2
Jul 2 00:15:48.080676 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: Listening on routing socket on fd #21 for interface updates
Jul 2 00:15:48.078951 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123
Jul 2 00:15:48.079024 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 2 00:15:48.079255 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123
Jul 2 00:15:48.079292 ntpd[1941]: Listen normally on 3 eth0 172.31.23.160:123
Jul 2 00:15:48.079361 ntpd[1941]: Listen normally on 4 lo [::1]:123
Jul 2 00:15:48.079409 ntpd[1941]: bind(21) AF_INET6 fe80::45d:2eff:fefd:a521%2#123 flags 0x11 failed: Cannot assign requested address
Jul 2 00:15:48.079432 ntpd[1941]: unable to create socket on eth0 (5) for fe80::45d:2eff:fefd:a521%2#123
Jul 2 00:15:48.079449 ntpd[1941]: failed to init interface for address fe80::45d:2eff:fefd:a521%2
Jul 2 00:15:48.079483 ntpd[1941]: Listening on routing socket on fd #21 for interface updates
Jul 2 00:15:48.082118 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:15:48.094545 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 2 00:15:48.097622 update_engine[1948]: I0702 00:15:48.097566 1948 update_check_scheduler.cc:74] Next update check in 5m31s
Jul 2 00:15:48.099271 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:15:48.101478 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:15:48.101478 ntpd[1941]: 2 Jul 00:15:48 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:15:48.099314 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:15:48.105666 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:15:48.130933 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:15:48.134027 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:15:48.143662 tar[1953]: linux-amd64/helm
Jul 2 00:15:48.158644 extend-filesystems[1939]: Resized partition /dev/nvme0n1p9
Jul 2 00:15:48.174023 extend-filesystems[2005]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:15:48.208363 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 2 00:15:48.240979 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 2 00:15:48.248259 coreos-metadata[1936]: Jul 02 00:15:48.247 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 2 00:15:48.256004 coreos-metadata[1936]: Jul 02 00:15:48.253 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 2 00:15:48.260404 coreos-metadata[1936]: Jul 02 00:15:48.256 INFO Fetch successful
Jul 2 00:15:48.260404 coreos-metadata[1936]: Jul 02 00:15:48.259 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 2 00:15:48.265587 coreos-metadata[1936]: Jul 02 00:15:48.262 INFO Fetch successful
Jul 2 00:15:48.265587 coreos-metadata[1936]: Jul 02 00:15:48.262 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.267 INFO Fetch successful
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.267 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.268 INFO Fetch successful
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.268 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.271 INFO Fetch failed with 404: resource not found
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.271 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.272 INFO Fetch successful
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.272 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.273 INFO Fetch successful
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.273 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.274 INFO Fetch successful
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.274 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.274 INFO Fetch successful
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.274 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 2 00:15:48.322679 coreos-metadata[1936]: Jul 02 00:15:48.275 INFO Fetch successful
Jul 2 00:15:48.345598 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 2 00:15:48.367999 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1825)
Jul 2 00:15:48.417795 extend-filesystems[2005]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 2 00:15:48.417795 extend-filesystems[2005]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:15:48.417795 extend-filesystems[2005]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 2 00:15:48.424270 extend-filesystems[1939]: Resized filesystem in /dev/nvme0n1p9
Jul 2 00:15:48.418972 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:15:48.421302 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:15:48.424455 systemd-logind[1947]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 2 00:15:48.425137 systemd-logind[1947]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jul 2 00:15:48.425179 systemd-logind[1947]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:15:48.427841 systemd-logind[1947]: New seat seat0.
Jul 2 00:15:48.456879 bash[2007]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:15:48.444717 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:15:48.460896 systemd[1]: Starting sshkeys.service...
Jul 2 00:15:48.462629 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:15:48.465902 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 00:15:48.476882 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:15:48.503505 systemd-networkd[1810]: eth0: Gained IPv6LL
Jul 2 00:15:48.525886 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:15:48.528352 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:15:48.548779 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jul 2 00:15:48.562637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:15:48.580015 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:15:48.646360 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 2 00:15:48.659600 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 2 00:15:48.719838 dbus-daemon[1937]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 2 00:15:48.720060 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 2 00:15:48.721964 dbus-daemon[1937]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1995 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 2 00:15:48.734271 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 2 00:15:48.742362 sshd_keygen[1970]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:15:48.797024 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:15:48.860211 polkitd[2067]: Started polkitd version 121
Jul 2 00:15:48.870050 coreos-metadata[2063]: Jul 02 00:15:48.868 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 2 00:15:48.870050 coreos-metadata[2063]: Jul 02 00:15:48.869 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jul 2 00:15:48.879162 coreos-metadata[2063]: Jul 02 00:15:48.875 INFO Fetch successful
Jul 2 00:15:48.879162 coreos-metadata[2063]: Jul 02 00:15:48.876 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 00:15:48.888415 coreos-metadata[2063]: Jul 02 00:15:48.883 INFO Fetch successful
Jul 2 00:15:48.893456 unknown[2063]: wrote ssh authorized keys file for user: core
Jul 2 00:15:48.896974 polkitd[2067]: Loading rules from directory /etc/polkit-1/rules.d
Jul 2 00:15:48.897063 polkitd[2067]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 2 00:15:48.911784 polkitd[2067]: Finished loading, compiling and executing 2 rules
Jul 2 00:15:48.921367 dbus-daemon[1937]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 2 00:15:48.926662 polkitd[2067]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 2 00:15:48.934952 systemd[1]: Started polkit.service - Authorization Manager.
Jul 2 00:15:48.976662 update-ssh-keys[2121]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:15:48.977919 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 00:15:48.983093 systemd[1]: Finished sshkeys.service.
Jul 2 00:15:48.987946 amazon-ssm-agent[2050]: Initializing new seelog logger
Jul 2 00:15:48.992930 amazon-ssm-agent[2050]: New Seelog Logger Creation Complete
Jul 2 00:15:48.992930 amazon-ssm-agent[2050]: 2024/07/02 00:15:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:15:48.992930 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:15:48.992930 amazon-ssm-agent[2050]: 2024/07/02 00:15:48 processing appconfig overrides
Jul 2 00:15:48.995240 locksmithd[1996]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:15:48.998126 systemd-hostnamed[1995]: Hostname set to (transient)
Jul 2 00:15:48.998451 systemd-resolved[1765]: System hostname changed to 'ip-172-31-23-160'.
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: 2024/07/02 00:15:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: 2024/07/02 00:15:49 processing appconfig overrides
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: 2024/07/02 00:15:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: 2024/07/02 00:15:49 processing appconfig overrides
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO Proxy environment variables:
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: 2024/07/02 00:15:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:15:49.009389 amazon-ssm-agent[2050]: 2024/07/02 00:15:49 processing appconfig overrides
Jul 2 00:15:49.039717 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:15:49.050384 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:15:49.059461 systemd[1]: Started sshd@0-172.31.23.160:22-147.75.109.163:49744.service - OpenSSH per-connection server daemon (147.75.109.163:49744).
Jul 2 00:15:49.101846 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO https_proxy:
Jul 2 00:15:49.128287 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:15:49.128545 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:15:49.141714 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:15:49.211798 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO http_proxy:
Jul 2 00:15:49.238033 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:15:49.248957 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:15:49.253689 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:15:49.256583 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:15:49.316485 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO no_proxy:
Jul 2 00:15:49.416213 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO Checking if agent identity type OnPrem can be assumed
Jul 2 00:15:49.472946 containerd[1977]: time="2024-07-02T00:15:49.472835094Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:15:49.494433 sshd[2148]: Accepted publickey for core from 147.75.109.163 port 49744 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:15:49.500028 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:49.515349 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO Checking if agent identity type EC2 can be assumed
Jul 2 00:15:49.532410 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:15:49.542718 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:15:49.556397 systemd-logind[1947]: New session 1 of user core.
Jul 2 00:15:49.589531 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:15:49.602731 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:15:49.615946 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO Agent will take identity from EC2
Jul 2 00:15:49.621566 (systemd)[2178]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:49.659620 containerd[1977]: time="2024-07-02T00:15:49.659115428Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:15:49.659620 containerd[1977]: time="2024-07-02T00:15:49.659194160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:15:49.667876 containerd[1977]: time="2024-07-02T00:15:49.667083572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:15:49.667876 containerd[1977]: time="2024-07-02T00:15:49.667467028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:15:49.672423 containerd[1977]: time="2024-07-02T00:15:49.671760787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:15:49.672423 containerd[1977]: time="2024-07-02T00:15:49.671837102Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:15:49.675642 containerd[1977]: time="2024-07-02T00:15:49.675581955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:15:49.675779 containerd[1977]: time="2024-07-02T00:15:49.675739439Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:15:49.675908 containerd[1977]: time="2024-07-02T00:15:49.675782795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:15:49.676348 containerd[1977]: time="2024-07-02T00:15:49.676102915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:15:49.677202 containerd[1977]: time="2024-07-02T00:15:49.676867064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:15:49.677202 containerd[1977]: time="2024-07-02T00:15:49.676916680Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:15:49.677202 containerd[1977]: time="2024-07-02T00:15:49.676983766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:15:49.681696 containerd[1977]: time="2024-07-02T00:15:49.681429834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:15:49.684197 containerd[1977]: time="2024-07-02T00:15:49.681827809Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:15:49.684197 containerd[1977]: time="2024-07-02T00:15:49.683486974Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:15:49.684197 containerd[1977]: time="2024-07-02T00:15:49.683517866Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:15:49.700449 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:15:49.701137 containerd[1977]: time="2024-07-02T00:15:49.700073897Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:15:49.704367 containerd[1977]: time="2024-07-02T00:15:49.701399845Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [amazon-ssm-agent] Starting Core Agent
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [Registrar] Starting registrar module
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [EC2Identity] EC2 registration was successful.
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [CredentialRefresher] credentialRefresher has started
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 2 00:15:49.704465 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.710967714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711030562Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..."
type=io.containerd.lease.v1 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711055298Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711072556Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711092931Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711318454Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711355125Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711376104Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711397658Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711419407Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711445162Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711465769Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711485335Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 2 00:15:49.713430 containerd[1977]: time="2024-07-02T00:15:49.711508540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.711529406Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.711548908Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.711567427Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.711708841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.712054746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.712088967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.712110779Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.712145666Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.712215343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.712233155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.712252449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.712270548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.713995 containerd[1977]: time="2024-07-02T00:15:49.712299711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.712319197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.716871100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.716912115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.716939682Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.717141736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.717168860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.717190030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.717214334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.717234316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.717259423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.717777 containerd[1977]: time="2024-07-02T00:15:49.717292454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.718157 containerd[1977]: time="2024-07-02T00:15:49.718094072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 00:15:49.718483 amazon-ssm-agent[2050]: 2024-07-02 00:15:49 INFO [CredentialRefresher] Next credential rotation will be in 32.09147494423333 minutes Jul 2 00:15:49.718971 containerd[1977]: time="2024-07-02T00:15:49.718862877Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false 
PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:15:49.718971 containerd[1977]: time="2024-07-02T00:15:49.718966710Z" level=info msg="Connect containerd service" Jul 2 00:15:49.719240 containerd[1977]: time="2024-07-02T00:15:49.719017701Z" level=info msg="using legacy CRI server" Jul 2 00:15:49.719240 containerd[1977]: time="2024-07-02T00:15:49.719028709Z" level=info msg="using experimental NRI 
integration - disable nri plugin to prevent this" Jul 2 00:15:49.719240 containerd[1977]: time="2024-07-02T00:15:49.719159550Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:15:49.724014 containerd[1977]: time="2024-07-02T00:15:49.723542947Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:15:49.724014 containerd[1977]: time="2024-07-02T00:15:49.723614247Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:15:49.724014 containerd[1977]: time="2024-07-02T00:15:49.723642815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:15:49.724014 containerd[1977]: time="2024-07-02T00:15:49.723664227Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:15:49.724014 containerd[1977]: time="2024-07-02T00:15:49.723688235Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:15:49.724227 containerd[1977]: time="2024-07-02T00:15:49.724068913Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:15:49.724227 containerd[1977]: time="2024-07-02T00:15:49.724127825Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 00:15:49.724227 containerd[1977]: time="2024-07-02T00:15:49.724199970Z" level=info msg="Start subscribing containerd event" Jul 2 00:15:49.724322 containerd[1977]: time="2024-07-02T00:15:49.724251874Z" level=info msg="Start recovering state" Jul 2 00:15:49.729349 containerd[1977]: time="2024-07-02T00:15:49.725562596Z" level=info msg="Start event monitor" Jul 2 00:15:49.729349 containerd[1977]: time="2024-07-02T00:15:49.725592514Z" level=info msg="Start snapshots syncer" Jul 2 00:15:49.729349 containerd[1977]: time="2024-07-02T00:15:49.725607418Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:15:49.729349 containerd[1977]: time="2024-07-02T00:15:49.725621118Z" level=info msg="Start streaming server" Jul 2 00:15:49.729349 containerd[1977]: time="2024-07-02T00:15:49.725704669Z" level=info msg="containerd successfully booted in 0.265587s" Jul 2 00:15:49.726178 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:15:49.877002 systemd[2178]: Queued start job for default target default.target. Jul 2 00:15:49.884069 systemd[2178]: Created slice app.slice - User Application Slice. Jul 2 00:15:49.884115 systemd[2178]: Reached target paths.target - Paths. Jul 2 00:15:49.884136 systemd[2178]: Reached target timers.target - Timers. Jul 2 00:15:49.888610 systemd[2178]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:15:49.942799 systemd[2178]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:15:49.943131 systemd[2178]: Reached target sockets.target - Sockets. Jul 2 00:15:49.943274 systemd[2178]: Reached target basic.target - Basic System. Jul 2 00:15:49.943729 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:15:49.944204 systemd[2178]: Reached target default.target - Main User Target. Jul 2 00:15:49.944518 systemd[2178]: Startup finished in 306ms. 
Jul 2 00:15:49.957901 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:15:50.104532 tar[1953]: linux-amd64/LICENSE
Jul 2 00:15:50.104532 tar[1953]: linux-amd64/README.md
Jul 2 00:15:50.171675 systemd[1]: Started sshd@1-172.31.23.160:22-147.75.109.163:49754.service - OpenSSH per-connection server daemon (147.75.109.163:49754).
Jul 2 00:15:50.174364 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 00:15:50.376300 sshd[2194]: Accepted publickey for core from 147.75.109.163 port 49754 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:15:50.377517 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:50.385581 systemd-logind[1947]: New session 2 of user core.
Jul 2 00:15:50.394108 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:15:50.521659 sshd[2194]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:50.530029 systemd[1]: sshd@1-172.31.23.160:22-147.75.109.163:49754.service: Deactivated successfully.
Jul 2 00:15:50.538317 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:15:50.543457 systemd-logind[1947]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:15:50.587814 systemd[1]: Started sshd@2-172.31.23.160:22-147.75.109.163:49758.service - OpenSSH per-connection server daemon (147.75.109.163:49758).
Jul 2 00:15:50.592274 systemd-logind[1947]: Removed session 2.
Jul 2 00:15:50.738117 amazon-ssm-agent[2050]: 2024-07-02 00:15:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 2 00:15:50.772188 sshd[2202]: Accepted publickey for core from 147.75.109.163 port 49758 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:15:50.776730 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:50.790411 systemd-logind[1947]: New session 3 of user core.
Jul 2 00:15:50.795822 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:15:50.835754 amazon-ssm-agent[2050]: 2024-07-02 00:15:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2205) started
Jul 2 00:15:50.932686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:15:50.935275 sshd[2202]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:50.936083 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:15:50.938945 amazon-ssm-agent[2050]: 2024-07-02 00:15:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 2 00:15:50.938358 systemd[1]: Startup finished in 776ms (kernel) + 9.890s (initrd) + 9.366s (userspace) = 20.034s.
Jul 2 00:15:50.952880 systemd[1]: sshd@2-172.31.23.160:22-147.75.109.163:49758.service: Deactivated successfully.
Jul 2 00:15:50.953513 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:15:51.003951 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 00:15:51.014894 ntpd[1941]: Listen normally on 6 eth0 [fe80::45d:2eff:fefd:a521%2]:123
Jul 2 00:15:51.015711 systemd-logind[1947]: Session 3 logged out. Waiting for processes to exit.
Jul 2 00:15:51.023461 ntpd[1941]: 2 Jul 00:15:51 ntpd[1941]: Listen normally on 6 eth0 [fe80::45d:2eff:fefd:a521%2]:123
Jul 2 00:15:51.024659 systemd-logind[1947]: Removed session 3.
Jul 2 00:15:51.828915 kubelet[2219]: E0702 00:15:51.828850 2219 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:15:51.833219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:15:51.833685 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:15:51.834539 systemd[1]: kubelet.service: Consumed 1.056s CPU time.
Jul 2 00:15:55.272738 systemd-resolved[1765]: Clock change detected. Flushing caches.
Jul 2 00:16:01.235753 systemd[1]: Started sshd@3-172.31.23.160:22-147.75.109.163:54414.service - OpenSSH per-connection server daemon (147.75.109.163:54414).
Jul 2 00:16:01.442660 sshd[2239]: Accepted publickey for core from 147.75.109.163 port 54414 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:16:01.444525 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:16:01.468913 systemd-logind[1947]: New session 4 of user core.
Jul 2 00:16:01.484575 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:16:01.627807 sshd[2239]: pam_unix(sshd:session): session closed for user core
Jul 2 00:16:01.635842 systemd[1]: sshd@3-172.31.23.160:22-147.75.109.163:54414.service: Deactivated successfully.
Jul 2 00:16:01.640753 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:16:01.645547 systemd-logind[1947]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:16:01.647785 systemd-logind[1947]: Removed session 4.
Jul 2 00:16:01.688977 systemd[1]: Started sshd@4-172.31.23.160:22-147.75.109.163:54424.service - OpenSSH per-connection server daemon (147.75.109.163:54424).
Jul 2 00:16:01.945454 sshd[2246]: Accepted publickey for core from 147.75.109.163 port 54424 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:16:01.950543 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:16:01.968296 systemd-logind[1947]: New session 5 of user core.
Jul 2 00:16:01.978788 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:16:02.110404 sshd[2246]: pam_unix(sshd:session): session closed for user core
Jul 2 00:16:02.131088 systemd[1]: sshd@4-172.31.23.160:22-147.75.109.163:54424.service: Deactivated successfully.
Jul 2 00:16:02.155560 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:16:02.157079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:16:02.158404 systemd-logind[1947]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:16:02.181748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:16:02.197695 systemd[1]: Started sshd@5-172.31.23.160:22-147.75.109.163:54428.service - OpenSSH per-connection server daemon (147.75.109.163:54428).
Jul 2 00:16:02.204222 systemd-logind[1947]: Removed session 5.
Jul 2 00:16:02.428305 sshd[2254]: Accepted publickey for core from 147.75.109.163 port 54428 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:16:02.431175 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:16:02.441902 systemd-logind[1947]: New session 6 of user core.
Jul 2 00:16:02.453502 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:16:02.606577 sshd[2254]: pam_unix(sshd:session): session closed for user core
Jul 2 00:16:02.620819 systemd[1]: sshd@5-172.31.23.160:22-147.75.109.163:54428.service: Deactivated successfully.
Jul 2 00:16:02.625015 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:16:02.637199 systemd-logind[1947]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:16:02.673808 systemd[1]: Started sshd@6-172.31.23.160:22-147.75.109.163:44346.service - OpenSSH per-connection server daemon (147.75.109.163:44346).
Jul 2 00:16:02.683882 systemd-logind[1947]: Removed session 6.
Jul 2 00:16:02.892660 sshd[2263]: Accepted publickey for core from 147.75.109.163 port 44346 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:16:02.899734 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:16:02.924287 systemd-logind[1947]: New session 7 of user core.
Jul 2 00:16:02.937399 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:16:03.019490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:16:03.023425 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:16:03.120312 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:16:03.120747 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:16:03.136108 sudo[2276]: pam_unix(sudo:session): session closed for user root
Jul 2 00:16:03.138604 kubelet[2271]: E0702 00:16:03.138544 2271 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:16:03.150274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:16:03.150471 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:16:03.162911 sshd[2263]: pam_unix(sshd:session): session closed for user core
Jul 2 00:16:03.174491 systemd[1]: sshd@6-172.31.23.160:22-147.75.109.163:44346.service: Deactivated successfully.
Jul 2 00:16:03.178596 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:16:03.181782 systemd-logind[1947]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:16:03.213720 systemd[1]: Started sshd@7-172.31.23.160:22-147.75.109.163:44350.service - OpenSSH per-connection server daemon (147.75.109.163:44350).
Jul 2 00:16:03.219098 systemd-logind[1947]: Removed session 7.
Jul 2 00:16:03.392411 sshd[2285]: Accepted publickey for core from 147.75.109.163 port 44350 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:16:03.393914 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:16:03.405401 systemd-logind[1947]: New session 8 of user core.
Jul 2 00:16:03.412499 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 00:16:03.514555 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:16:03.514919 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:16:03.521120 sudo[2289]: pam_unix(sudo:session): session closed for user root
Jul 2 00:16:03.529915 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:16:03.530318 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:16:03.549628 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:16:03.552031 auditctl[2292]: No rules
Jul 2 00:16:03.552533 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:16:03.552760 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:16:03.564076 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:16:03.624603 augenrules[2310]: No rules
Jul 2 00:16:03.628502 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:16:03.632829 sudo[2288]: pam_unix(sudo:session): session closed for user root
Jul 2 00:16:03.658756 sshd[2285]: pam_unix(sshd:session): session closed for user core
Jul 2 00:16:03.663013 systemd[1]: sshd@7-172.31.23.160:22-147.75.109.163:44350.service: Deactivated successfully.
Jul 2 00:16:03.665006 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:16:03.666883 systemd-logind[1947]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:16:03.668177 systemd-logind[1947]: Removed session 8.
Jul 2 00:16:03.698709 systemd[1]: Started sshd@8-172.31.23.160:22-147.75.109.163:44366.service - OpenSSH per-connection server daemon (147.75.109.163:44366).
Jul 2 00:16:03.862579 sshd[2318]: Accepted publickey for core from 147.75.109.163 port 44366 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:16:03.864535 sshd[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:16:03.869354 systemd-logind[1947]: New session 9 of user core.
Jul 2 00:16:03.879437 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:16:03.977491 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:16:03.977876 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:16:04.277617 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:16:04.277722 (dockerd)[2330]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:16:04.928669 dockerd[2330]: time="2024-07-02T00:16:04.928607262Z" level=info msg="Starting up"
Jul 2 00:16:05.030667 dockerd[2330]: time="2024-07-02T00:16:05.030615713Z" level=info msg="Loading containers: start."
Jul 2 00:16:05.311286 kernel: Initializing XFRM netlink socket
Jul 2 00:16:05.352605 (udev-worker)[2342]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:16:05.470592 systemd-networkd[1810]: docker0: Link UP
Jul 2 00:16:05.496701 dockerd[2330]: time="2024-07-02T00:16:05.496656506Z" level=info msg="Loading containers: done."
Jul 2 00:16:05.664383 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1880289193-merged.mount: Deactivated successfully.
Jul 2 00:16:05.672497 dockerd[2330]: time="2024-07-02T00:16:05.672443871Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:16:05.672718 dockerd[2330]: time="2024-07-02T00:16:05.672684397Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:16:05.672844 dockerd[2330]: time="2024-07-02T00:16:05.672814954Z" level=info msg="Daemon has completed initialization"
Jul 2 00:16:05.749408 dockerd[2330]: time="2024-07-02T00:16:05.749285156Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:16:05.749517 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:16:07.724534 containerd[1977]: time="2024-07-02T00:16:07.724487105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul 2 00:16:08.554787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094981461.mount: Deactivated successfully.
Jul 2 00:16:11.941433 containerd[1977]: time="2024-07-02T00:16:11.941318051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:11.943309 containerd[1977]: time="2024-07-02T00:16:11.943256774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801"
Jul 2 00:16:11.944466 containerd[1977]: time="2024-07-02T00:16:11.944410168Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:11.949623 containerd[1977]: time="2024-07-02T00:16:11.949557919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:11.951801 containerd[1977]: time="2024-07-02T00:16:11.951722594Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 4.227186795s"
Jul 2 00:16:11.952384 containerd[1977]: time="2024-07-02T00:16:11.952122319Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\""
Jul 2 00:16:11.989296 containerd[1977]: time="2024-07-02T00:16:11.989252319Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul 2 00:16:13.400896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:16:13.410562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:16:13.727457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:16:13.753764 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:16:13.856991 kubelet[2532]: E0702 00:16:13.856895 2532 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:16:13.860203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:16:13.860442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:16:15.425223 containerd[1977]: time="2024-07-02T00:16:15.425086789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:15.426864 containerd[1977]: time="2024-07-02T00:16:15.426679332Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674"
Jul 2 00:16:15.431258 containerd[1977]: time="2024-07-02T00:16:15.429507919Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:15.438675 containerd[1977]: time="2024-07-02T00:16:15.438616374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:15.440721 containerd[1977]: time="2024-07-02T00:16:15.440668887Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 3.451358428s"
Jul 2 00:16:15.441019 containerd[1977]: time="2024-07-02T00:16:15.440989304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\""
Jul 2 00:16:15.471620 containerd[1977]: time="2024-07-02T00:16:15.471573302Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul 2 00:16:17.874130 containerd[1977]: time="2024-07-02T00:16:17.874071849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:17.905203 containerd[1977]: time="2024-07-02T00:16:17.905135248Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120"
Jul 2 00:16:17.945722 containerd[1977]: time="2024-07-02T00:16:17.945640939Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:18.000341 containerd[1977]: time="2024-07-02T00:16:17.999526245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:18.002041 containerd[1977]: time="2024-07-02T00:16:18.001487943Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 2.529861544s"
Jul 2 00:16:18.002174 containerd[1977]: time="2024-07-02T00:16:18.002037595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\""
Jul 2 00:16:18.034914 containerd[1977]: time="2024-07-02T00:16:18.034874101Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul 2 00:16:19.296144 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 2 00:16:19.712381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3684696616.mount: Deactivated successfully.
Jul 2 00:16:20.543883 containerd[1977]: time="2024-07-02T00:16:20.543718880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:20.569245 containerd[1977]: time="2024-07-02T00:16:20.569135439Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438"
Jul 2 00:16:20.588928 containerd[1977]: time="2024-07-02T00:16:20.588844665Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:20.608907 containerd[1977]: time="2024-07-02T00:16:20.608755195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:20.610264 containerd[1977]: time="2024-07-02T00:16:20.610202766Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 2.575292279s"
Jul 2 00:16:20.610391 containerd[1977]: time="2024-07-02T00:16:20.610263717Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\""
Jul 2 00:16:20.644701 containerd[1977]: time="2024-07-02T00:16:20.644660198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 00:16:21.344274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621303190.mount: Deactivated successfully.
Jul 2 00:16:23.095462 containerd[1977]: time="2024-07-02T00:16:23.095399581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:23.099310 containerd[1977]: time="2024-07-02T00:16:23.098113225Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jul 2 00:16:23.102057 containerd[1977]: time="2024-07-02T00:16:23.101352511Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:23.106462 containerd[1977]: time="2024-07-02T00:16:23.106407709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:23.109470 containerd[1977]: time="2024-07-02T00:16:23.109309373Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.464471501s"
Jul 2 00:16:23.109605 containerd[1977]: time="2024-07-02T00:16:23.109481694Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 00:16:23.162211 containerd[1977]: time="2024-07-02T00:16:23.162157807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:16:23.861127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397196716.mount: Deactivated successfully.
Jul 2 00:16:23.876339 containerd[1977]: time="2024-07-02T00:16:23.875883488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:23.878958 containerd[1977]: time="2024-07-02T00:16:23.878885156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jul 2 00:16:23.880703 containerd[1977]: time="2024-07-02T00:16:23.879356250Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:23.889952 containerd[1977]: time="2024-07-02T00:16:23.887111252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:23.889952 containerd[1977]: time="2024-07-02T00:16:23.888291100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 726.061742ms"
Jul 2 00:16:23.889952 containerd[1977]: time="2024-07-02T00:16:23.888329848Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:16:23.948283 containerd[1977]: time="2024-07-02T00:16:23.948222788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 00:16:24.121502 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:16:24.151703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:16:24.955592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:16:24.971895 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:16:25.051617 kubelet[2638]: E0702 00:16:25.051572 2638 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:16:25.056875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:16:25.057383 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:16:25.061311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738374761.mount: Deactivated successfully.
Jul 2 00:16:31.563385 containerd[1977]: time="2024-07-02T00:16:31.563305825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:31.578838 containerd[1977]: time="2024-07-02T00:16:31.578762330Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jul 2 00:16:31.602793 containerd[1977]: time="2024-07-02T00:16:31.602735888Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:31.646878 containerd[1977]: time="2024-07-02T00:16:31.646721149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:16:31.648481 containerd[1977]: time="2024-07-02T00:16:31.648254731Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 7.699945724s"
Jul 2 00:16:31.648481 containerd[1977]: time="2024-07-02T00:16:31.648307448Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jul 2 00:16:33.176193 update_engine[1948]: I0702 00:16:33.175293 1948 update_attempter.cc:509] Updating boot flags...
Jul 2 00:16:33.343262 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2766)
Jul 2 00:16:33.741094 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2767)
Jul 2 00:16:35.117774 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 00:16:35.127686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:16:35.973754 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:16:35.973911 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:16:35.974467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:16:35.980665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:16:36.018467 systemd[1]: Reloading requested from client PID 2944 ('systemctl') (unit session-9.scope)...
Jul 2 00:16:36.018509 systemd[1]: Reloading...
Jul 2 00:16:36.139394 zram_generator::config[2985]: No configuration found.
Jul 2 00:16:36.297029 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:16:36.406533 systemd[1]: Reloading finished in 387 ms.
Jul 2 00:16:36.462624 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:16:36.462754 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:16:36.463155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:16:36.470187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:16:37.105658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:16:37.127490 (kubelet)[3039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:16:37.220407 kubelet[3039]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:16:37.220407 kubelet[3039]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:16:37.220407 kubelet[3039]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:16:37.225812 kubelet[3039]: I0702 00:16:37.225749 3039 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:16:37.595063 kubelet[3039]: I0702 00:16:37.595017 3039 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 00:16:37.595063 kubelet[3039]: I0702 00:16:37.595050 3039 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:16:37.595651 kubelet[3039]: I0702 00:16:37.595439 3039 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 00:16:37.657513 kubelet[3039]: I0702 00:16:37.657472 3039 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:16:37.665041 kubelet[3039]: E0702 00:16:37.663914 3039 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.160:6443: connect: connection refused
Jul 2 00:16:37.694008 kubelet[3039]: I0702 00:16:37.693959 3039 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:16:37.703299 kubelet[3039]: I0702 00:16:37.698041 3039 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:16:37.703299 kubelet[3039]: I0702 00:16:37.698108 3039 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:16:37.703299 kubelet[3039]: I0702 00:16:37.698910 3039 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:16:37.703299 kubelet[3039]: I0702 00:16:37.698927 3039 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:16:37.706077 kubelet[3039]: I0702 00:16:37.706042 3039 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:16:37.713624 kubelet[3039]: I0702 00:16:37.713382 3039 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 00:16:37.713624 kubelet[3039]: I0702 00:16:37.713634 3039 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:16:37.714170 kubelet[3039]: I0702 00:16:37.713675 3039 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:16:37.714170 kubelet[3039]: I0702 00:16:37.713699 3039 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:16:37.728497 kubelet[3039]: W0702 00:16:37.726672 3039 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-160&limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused
Jul 2 00:16:37.728497 kubelet[3039]: E0702 00:16:37.726755 3039 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-160&limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused
Jul 2 00:16:37.728497 kubelet[3039]: I0702 00:16:37.727158 3039 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:16:37.733180 kubelet[3039]: I0702 00:16:37.731980 3039 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:16:37.733180 kubelet[3039]: W0702 00:16:37.732079 3039 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:16:37.733180 kubelet[3039]: I0702 00:16:37.732852 3039 server.go:1264] "Started kubelet"
Jul 2 00:16:37.733180 kubelet[3039]: W0702 00:16:37.733011 3039 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused
Jul 2 00:16:37.733180 kubelet[3039]: E0702 00:16:37.733062 3039 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused
Jul 2 00:16:37.741793 kubelet[3039]: I0702 00:16:37.741699 3039 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:16:37.743276 kubelet[3039]: I0702 00:16:37.743196 3039 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 00:16:37.748449 kubelet[3039]: I0702 00:16:37.747261 3039 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:16:37.748449 kubelet[3039]: I0702 00:16:37.747610 3039 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:16:37.748449 kubelet[3039]: I0702 00:16:37.747967 3039 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:16:37.748449 kubelet[3039]: E0702 00:16:37.747805 3039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.160:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.160:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-160.17de3d332e60a9b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-160,UID:ip-172-31-23-160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-160,},FirstTimestamp:2024-07-02 00:16:37.732821433 +0000 UTC m=+0.598970161,LastTimestamp:2024-07-02 00:16:37.732821433 +0000 UTC m=+0.598970161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-160,}"
Jul 2 00:16:37.755948 kubelet[3039]: E0702 00:16:37.755044 3039 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-23-160\" not found"
Jul 2 00:16:37.755948 kubelet[3039]: I0702 00:16:37.755101 3039 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:16:37.755948 kubelet[3039]: I0702 00:16:37.755217 3039 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 00:16:37.755948 kubelet[3039]: I0702 00:16:37.755302 3039 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 00:16:37.755948 kubelet[3039]: W0702 00:16:37.755815 3039 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused
Jul 2 00:16:37.755948 kubelet[3039]: E0702 00:16:37.755869 3039 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused
Jul 2 00:16:37.756705 kubelet[3039]: E0702 00:16:37.756427 3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-160?timeout=10s\": dial tcp 172.31.23.160:6443: connect: connection refused" interval="200ms"
Jul 2 00:16:37.760800 kubelet[3039]: I0702 00:16:37.760719 3039 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:16:37.764792 kubelet[3039]: I0702 00:16:37.764213 3039 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:16:37.770271 kubelet[3039]: E0702 00:16:37.770045 3039 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:16:37.771793 kubelet[3039]: I0702 00:16:37.771597 3039 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:16:37.815863 kubelet[3039]: I0702 00:16:37.815795 3039 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:16:37.815863 kubelet[3039]: I0702 00:16:37.815818 3039 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:16:37.833866 kubelet[3039]: I0702 00:16:37.815843 3039 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:16:37.833866 kubelet[3039]: I0702 00:16:37.826847 3039 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:16:37.833866 kubelet[3039]: I0702 00:16:37.829303 3039 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:16:37.833866 kubelet[3039]: I0702 00:16:37.829364 3039 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:16:37.833866 kubelet[3039]: I0702 00:16:37.829386 3039 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 00:16:37.833866 kubelet[3039]: E0702 00:16:37.829565 3039 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:16:37.833866 kubelet[3039]: W0702 00:16:37.831932 3039 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused
Jul 2 00:16:37.833866 kubelet[3039]: E0702 00:16:37.832654 3039 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused
Jul 2 00:16:37.864590 kubelet[3039]: I0702 00:16:37.857373 3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-160"
Jul 2 00:16:37.864590 kubelet[3039]: E0702 00:16:37.857885 3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.160:6443/api/v1/nodes\": dial tcp 172.31.23.160:6443: connect: connection refused" node="ip-172-31-23-160"
Jul 2 00:16:37.866353 kubelet[3039]: I0702 00:16:37.865921 3039 policy_none.go:49] "None policy: Start"
Jul 2 00:16:37.869194 kubelet[3039]: I0702 00:16:37.869149 3039 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:16:37.870163 kubelet[3039]: I0702 00:16:37.869379 3039 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:16:37.884283 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 00:16:37.900671 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 00:16:37.905285 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 00:16:37.913326 kubelet[3039]: I0702 00:16:37.913212 3039 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:16:37.916316 kubelet[3039]: I0702 00:16:37.913658 3039 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 2 00:16:37.916316 kubelet[3039]: I0702 00:16:37.913811 3039 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:16:37.922203 kubelet[3039]: E0702 00:16:37.922137 3039 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-160\" not found"
Jul 2 00:16:37.930428 kubelet[3039]: I0702 00:16:37.929742 3039 topology_manager.go:215] "Topology Admit Handler" podUID="f8e86bc52dd48d50dcc4022a78a8fa35" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:37.931873 kubelet[3039]: I0702 00:16:37.931843 3039 topology_manager.go:215] "Topology Admit Handler" podUID="56df6cdfb747d0676a12dfcc32c7031f" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-160"
Jul 2 00:16:37.933692 kubelet[3039]: I0702 00:16:37.933669 3039 topology_manager.go:215] "Topology Admit Handler" podUID="383f83b35ccf0f88f2575f9aa4c41a0c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-160"
Jul 2 00:16:37.941150 systemd[1]: Created slice kubepods-burstable-podf8e86bc52dd48d50dcc4022a78a8fa35.slice - libcontainer container kubepods-burstable-podf8e86bc52dd48d50dcc4022a78a8fa35.slice.
Jul 2 00:16:37.956196 kubelet[3039]: I0702 00:16:37.955848 3039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/56df6cdfb747d0676a12dfcc32c7031f-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-160\" (UID: \"56df6cdfb747d0676a12dfcc32c7031f\") " pod="kube-system/kube-scheduler-ip-172-31-23-160"
Jul 2 00:16:37.956196 kubelet[3039]: I0702 00:16:37.955885 3039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/383f83b35ccf0f88f2575f9aa4c41a0c-ca-certs\") pod \"kube-apiserver-ip-172-31-23-160\" (UID: \"383f83b35ccf0f88f2575f9aa4c41a0c\") " pod="kube-system/kube-apiserver-ip-172-31-23-160"
Jul 2 00:16:37.956196 kubelet[3039]: I0702 00:16:37.955913 3039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/383f83b35ccf0f88f2575f9aa4c41a0c-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-160\" (UID: \"383f83b35ccf0f88f2575f9aa4c41a0c\") " pod="kube-system/kube-apiserver-ip-172-31-23-160"
Jul 2 00:16:37.956196 kubelet[3039]: I0702 00:16:37.955938 3039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/383f83b35ccf0f88f2575f9aa4c41a0c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-160\" (UID: \"383f83b35ccf0f88f2575f9aa4c41a0c\") " pod="kube-system/kube-apiserver-ip-172-31-23-160"
Jul 2 00:16:37.956196 kubelet[3039]: I0702 00:16:37.955965 3039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:37.956966 kubelet[3039]: I0702 00:16:37.955987 3039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:37.956966 kubelet[3039]: I0702 00:16:37.956011 3039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:37.956966 kubelet[3039]: I0702 00:16:37.956035 3039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:37.956966 kubelet[3039]: I0702 00:16:37.956063 3039 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:37.957999 kubelet[3039]: E0702 00:16:37.957934 3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-160?timeout=10s\": dial tcp 172.31.23.160:6443: connect: connection refused" interval="400ms"
Jul 2 00:16:37.959870 systemd[1]: Created slice kubepods-burstable-pod56df6cdfb747d0676a12dfcc32c7031f.slice - libcontainer container kubepods-burstable-pod56df6cdfb747d0676a12dfcc32c7031f.slice.
Jul 2 00:16:37.974026 systemd[1]: Created slice kubepods-burstable-pod383f83b35ccf0f88f2575f9aa4c41a0c.slice - libcontainer container kubepods-burstable-pod383f83b35ccf0f88f2575f9aa4c41a0c.slice.
Jul 2 00:16:38.061690 kubelet[3039]: I0702 00:16:38.061601 3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-160"
Jul 2 00:16:38.063010 kubelet[3039]: E0702 00:16:38.062963 3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.160:6443/api/v1/nodes\": dial tcp 172.31.23.160:6443: connect: connection refused" node="ip-172-31-23-160"
Jul 2 00:16:38.255829 containerd[1977]: time="2024-07-02T00:16:38.255769457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-160,Uid:f8e86bc52dd48d50dcc4022a78a8fa35,Namespace:kube-system,Attempt:0,}"
Jul 2 00:16:38.290591 containerd[1977]: time="2024-07-02T00:16:38.289892985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-160,Uid:56df6cdfb747d0676a12dfcc32c7031f,Namespace:kube-system,Attempt:0,}"
Jul 2 00:16:38.295865 containerd[1977]: time="2024-07-02T00:16:38.289894552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-160,Uid:383f83b35ccf0f88f2575f9aa4c41a0c,Namespace:kube-system,Attempt:0,}"
Jul 2 00:16:38.359070 kubelet[3039]: E0702 00:16:38.359014 3039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-160?timeout=10s\":
dial tcp 172.31.23.160:6443: connect: connection refused" interval="800ms" Jul 2 00:16:38.465199 kubelet[3039]: I0702 00:16:38.465165 3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-160" Jul 2 00:16:38.465611 kubelet[3039]: E0702 00:16:38.465574 3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.160:6443/api/v1/nodes\": dial tcp 172.31.23.160:6443: connect: connection refused" node="ip-172-31-23-160" Jul 2 00:16:38.579406 kubelet[3039]: W0702 00:16:38.579170 3039 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused Jul 2 00:16:38.579406 kubelet[3039]: E0702 00:16:38.579269 3039 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused Jul 2 00:16:38.810438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3609442749.mount: Deactivated successfully. 
Jul 2 00:16:38.828069 containerd[1977]: time="2024-07-02T00:16:38.828014738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:16:38.831569 containerd[1977]: time="2024-07-02T00:16:38.831110427Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:16:38.832836 containerd[1977]: time="2024-07-02T00:16:38.832793282Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:16:38.834964 containerd[1977]: time="2024-07-02T00:16:38.834923448Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:16:38.837058 containerd[1977]: time="2024-07-02T00:16:38.837006806Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:16:38.839612 containerd[1977]: time="2024-07-02T00:16:38.839567632Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:16:38.841266 containerd[1977]: time="2024-07-02T00:16:38.840640414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:16:38.847681 containerd[1977]: time="2024-07-02T00:16:38.844539373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:16:38.850310 
containerd[1977]: time="2024-07-02T00:16:38.850256710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.497509ms" Jul 2 00:16:38.860111 containerd[1977]: time="2024-07-02T00:16:38.860058888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 600.280981ms" Jul 2 00:16:38.861767 containerd[1977]: time="2024-07-02T00:16:38.861708388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.097558ms" Jul 2 00:16:38.924353 kubelet[3039]: W0702 00:16:38.924285 3039 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused Jul 2 00:16:38.924353 kubelet[3039]: E0702 00:16:38.924358 3039 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused Jul 2 00:16:39.169267 kubelet[3039]: E0702 00:16:39.162979 3039 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://172.31.23.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-160?timeout=10s\": dial tcp 172.31.23.160:6443: connect: connection refused" interval="1.6s" Jul 2 00:16:39.192766 kubelet[3039]: W0702 00:16:39.192691 3039 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused Jul 2 00:16:39.192766 kubelet[3039]: E0702 00:16:39.192770 3039 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused Jul 2 00:16:39.283420 kubelet[3039]: W0702 00:16:39.281584 3039 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-160&limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused Jul 2 00:16:39.283420 kubelet[3039]: E0702 00:16:39.281662 3039 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-160&limit=500&resourceVersion=0": dial tcp 172.31.23.160:6443: connect: connection refused Jul 2 00:16:39.286155 kubelet[3039]: I0702 00:16:39.286096 3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-160" Jul 2 00:16:39.286707 kubelet[3039]: E0702 00:16:39.286676 3039 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.160:6443/api/v1/nodes\": dial tcp 172.31.23.160:6443: connect: connection refused" node="ip-172-31-23-160" Jul 2 00:16:39.355984 containerd[1977]: time="2024-07-02T00:16:39.355671370Z" level=info 
msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:16:39.355984 containerd[1977]: time="2024-07-02T00:16:39.355755846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:16:39.355984 containerd[1977]: time="2024-07-02T00:16:39.355785501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:16:39.355984 containerd[1977]: time="2024-07-02T00:16:39.355808288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:16:39.363517 containerd[1977]: time="2024-07-02T00:16:39.361509340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:16:39.363517 containerd[1977]: time="2024-07-02T00:16:39.361566886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:16:39.363517 containerd[1977]: time="2024-07-02T00:16:39.361587271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:16:39.363517 containerd[1977]: time="2024-07-02T00:16:39.361600812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:16:39.371151 containerd[1977]: time="2024-07-02T00:16:39.370638718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:16:39.371151 containerd[1977]: time="2024-07-02T00:16:39.370719732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:16:39.371151 containerd[1977]: time="2024-07-02T00:16:39.370748609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:16:39.371151 containerd[1977]: time="2024-07-02T00:16:39.370769685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:16:39.406456 systemd[1]: Started cri-containerd-25e26456ac0fa5351c40504e408672d2ae96708d8facb93479d4ac7f8ca8fccb.scope - libcontainer container 25e26456ac0fa5351c40504e408672d2ae96708d8facb93479d4ac7f8ca8fccb. Jul 2 00:16:39.422682 systemd[1]: Started cri-containerd-a47c4ba42056749b1d245848107833b3b5c01d3686a17406210a7f6ffd6eb92c.scope - libcontainer container a47c4ba42056749b1d245848107833b3b5c01d3686a17406210a7f6ffd6eb92c. Jul 2 00:16:39.430618 systemd[1]: Started cri-containerd-4be6880771a2695355eb7006f6b2254f9e9b6efaa08fd5d25aad43faac05c304.scope - libcontainer container 4be6880771a2695355eb7006f6b2254f9e9b6efaa08fd5d25aad43faac05c304. 
Jul 2 00:16:39.528932 containerd[1977]: time="2024-07-02T00:16:39.528872290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-160,Uid:f8e86bc52dd48d50dcc4022a78a8fa35,Namespace:kube-system,Attempt:0,} returns sandbox id \"25e26456ac0fa5351c40504e408672d2ae96708d8facb93479d4ac7f8ca8fccb\"" Jul 2 00:16:39.542443 containerd[1977]: time="2024-07-02T00:16:39.542393972Z" level=info msg="CreateContainer within sandbox \"25e26456ac0fa5351c40504e408672d2ae96708d8facb93479d4ac7f8ca8fccb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:16:39.576660 containerd[1977]: time="2024-07-02T00:16:39.576550046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-160,Uid:383f83b35ccf0f88f2575f9aa4c41a0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a47c4ba42056749b1d245848107833b3b5c01d3686a17406210a7f6ffd6eb92c\"" Jul 2 00:16:39.581069 containerd[1977]: time="2024-07-02T00:16:39.580896142Z" level=info msg="CreateContainer within sandbox \"a47c4ba42056749b1d245848107833b3b5c01d3686a17406210a7f6ffd6eb92c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:16:39.591184 containerd[1977]: time="2024-07-02T00:16:39.590342641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-160,Uid:56df6cdfb747d0676a12dfcc32c7031f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4be6880771a2695355eb7006f6b2254f9e9b6efaa08fd5d25aad43faac05c304\"" Jul 2 00:16:39.594866 containerd[1977]: time="2024-07-02T00:16:39.594800426Z" level=info msg="CreateContainer within sandbox \"4be6880771a2695355eb7006f6b2254f9e9b6efaa08fd5d25aad43faac05c304\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:16:39.654721 containerd[1977]: time="2024-07-02T00:16:39.654488131Z" level=info msg="CreateContainer within sandbox \"25e26456ac0fa5351c40504e408672d2ae96708d8facb93479d4ac7f8ca8fccb\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8\"" Jul 2 00:16:39.660485 containerd[1977]: time="2024-07-02T00:16:39.660435792Z" level=info msg="StartContainer for \"fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8\"" Jul 2 00:16:39.661902 containerd[1977]: time="2024-07-02T00:16:39.660943568Z" level=info msg="CreateContainer within sandbox \"a47c4ba42056749b1d245848107833b3b5c01d3686a17406210a7f6ffd6eb92c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d55c0d3ed9a44c14a6a49c31d494f71b540bad1aed11414dd274c421c7683366\"" Jul 2 00:16:39.668752 containerd[1977]: time="2024-07-02T00:16:39.668698607Z" level=info msg="StartContainer for \"d55c0d3ed9a44c14a6a49c31d494f71b540bad1aed11414dd274c421c7683366\"" Jul 2 00:16:39.684993 containerd[1977]: time="2024-07-02T00:16:39.684742239Z" level=info msg="CreateContainer within sandbox \"4be6880771a2695355eb7006f6b2254f9e9b6efaa08fd5d25aad43faac05c304\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b\"" Jul 2 00:16:39.686709 containerd[1977]: time="2024-07-02T00:16:39.686662574Z" level=info msg="StartContainer for \"f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b\"" Jul 2 00:16:39.717831 kubelet[3039]: E0702 00:16:39.717710 3039 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.160:6443: connect: connection refused Jul 2 00:16:39.731879 systemd[1]: Started cri-containerd-d55c0d3ed9a44c14a6a49c31d494f71b540bad1aed11414dd274c421c7683366.scope - libcontainer container d55c0d3ed9a44c14a6a49c31d494f71b540bad1aed11414dd274c421c7683366. 
Jul 2 00:16:39.740779 systemd[1]: Started cri-containerd-fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8.scope - libcontainer container fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8. Jul 2 00:16:39.761138 systemd[1]: Started cri-containerd-f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b.scope - libcontainer container f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b. Jul 2 00:16:39.923513 containerd[1977]: time="2024-07-02T00:16:39.923194487Z" level=info msg="StartContainer for \"d55c0d3ed9a44c14a6a49c31d494f71b540bad1aed11414dd274c421c7683366\" returns successfully" Jul 2 00:16:39.923919 containerd[1977]: time="2024-07-02T00:16:39.923517276Z" level=info msg="StartContainer for \"fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8\" returns successfully" Jul 2 00:16:39.935763 containerd[1977]: time="2024-07-02T00:16:39.935517618Z" level=info msg="StartContainer for \"f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b\" returns successfully" Jul 2 00:16:40.892825 kubelet[3039]: I0702 00:16:40.892793 3039 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-160" Jul 2 00:16:43.486504 kubelet[3039]: I0702 00:16:43.484822 3039 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-160" Jul 2 00:16:43.550983 kubelet[3039]: E0702 00:16:43.550663 3039 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-160.17de3d332e60a9b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-160,UID:ip-172-31-23-160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-160,},FirstTimestamp:2024-07-02 00:16:37.732821433 +0000 UTC m=+0.598970161,LastTimestamp:2024-07-02 00:16:37.732821433 +0000 UTC 
m=+0.598970161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-160,}" Jul 2 00:16:43.567898 kubelet[3039]: E0702 00:16:43.567825 3039 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jul 2 00:16:43.730414 kubelet[3039]: I0702 00:16:43.730372 3039 apiserver.go:52] "Watching apiserver" Jul 2 00:16:43.757135 kubelet[3039]: I0702 00:16:43.756363 3039 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:16:45.611397 systemd[1]: Reloading requested from client PID 3312 ('systemctl') (unit session-9.scope)... Jul 2 00:16:45.611417 systemd[1]: Reloading... Jul 2 00:16:45.798265 zram_generator::config[3350]: No configuration found. Jul 2 00:16:46.007773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:16:46.140497 systemd[1]: Reloading finished in 528 ms. Jul 2 00:16:46.194420 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:16:46.207843 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:16:46.208155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:16:46.218639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:16:46.987623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:16:47.004813 (kubelet)[3407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:16:47.151149 kubelet[3407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:16:47.151149 kubelet[3407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:16:47.151149 kubelet[3407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:16:47.151835 kubelet[3407]: I0702 00:16:47.151788 3407 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:16:47.160290 kubelet[3407]: I0702 00:16:47.159031 3407 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:16:47.160290 kubelet[3407]: I0702 00:16:47.159065 3407 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:16:47.160290 kubelet[3407]: I0702 00:16:47.159346 3407 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:16:47.161519 kubelet[3407]: I0702 00:16:47.161495 3407 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:16:47.164332 kubelet[3407]: I0702 00:16:47.164289 3407 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:16:47.172538 kubelet[3407]: I0702 00:16:47.172428 3407 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:16:47.172976 kubelet[3407]: I0702 00:16:47.172737 3407 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:16:47.173052 kubelet[3407]: I0702 00:16:47.172774 3407 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:16:47.173052 kubelet[3407]: I0702 00:16:47.173007 3407 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
00:16:47.173052 kubelet[3407]: I0702 00:16:47.173023 3407 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:16:47.173284 kubelet[3407]: I0702 00:16:47.173076 3407 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:16:47.174017 kubelet[3407]: I0702 00:16:47.173985 3407 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:16:47.174319 kubelet[3407]: I0702 00:16:47.174271 3407 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:16:47.176007 kubelet[3407]: I0702 00:16:47.174407 3407 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:16:47.176007 kubelet[3407]: I0702 00:16:47.174435 3407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:16:47.188000 kubelet[3407]: I0702 00:16:47.187973 3407 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:16:47.188521 kubelet[3407]: I0702 00:16:47.188500 3407 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:16:47.189933 kubelet[3407]: I0702 00:16:47.189516 3407 server.go:1264] "Started kubelet" Jul 2 00:16:47.200793 kubelet[3407]: I0702 00:16:47.200759 3407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:16:47.213388 kubelet[3407]: I0702 00:16:47.210564 3407 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:16:47.220146 kubelet[3407]: I0702 00:16:47.220067 3407 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:16:47.226259 kubelet[3407]: I0702 00:16:47.220529 3407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:16:47.248849 kubelet[3407]: I0702 00:16:47.247934 3407 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:16:47.248849 kubelet[3407]: I0702 00:16:47.229152 3407 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:16:47.248849 kubelet[3407]: I0702 00:16:47.229088 3407 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:16:47.248849 kubelet[3407]: I0702 00:16:47.248354 3407 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:16:47.261925 kubelet[3407]: I0702 00:16:47.261037 3407 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:16:47.261925 kubelet[3407]: I0702 00:16:47.261061 3407 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:16:47.261925 kubelet[3407]: I0702 00:16:47.261150 3407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:16:47.263640 kubelet[3407]: I0702 00:16:47.263602 3407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:16:47.266276 kubelet[3407]: I0702 00:16:47.266220 3407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:16:47.266427 kubelet[3407]: I0702 00:16:47.266414 3407 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:16:47.266537 kubelet[3407]: I0702 00:16:47.266527 3407 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:16:47.267303 kubelet[3407]: E0702 00:16:47.266641 3407 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:16:47.301301 kubelet[3407]: E0702 00:16:47.300106 3407 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:16:47.352816 kubelet[3407]: I0702 00:16:47.352354 3407 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-160"
Jul 2 00:16:47.367907 kubelet[3407]: E0702 00:16:47.367877 3407 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 00:16:47.389268 kubelet[3407]: I0702 00:16:47.389217 3407 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-160"
Jul 2 00:16:47.392572 kubelet[3407]: I0702 00:16:47.392302 3407 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-160"
Jul 2 00:16:47.434430 kubelet[3407]: I0702 00:16:47.434382 3407 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:16:47.434430 kubelet[3407]: I0702 00:16:47.434411 3407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:16:47.434430 kubelet[3407]: I0702 00:16:47.434437 3407 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:16:47.434690 kubelet[3407]: I0702 00:16:47.434633 3407 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 00:16:47.434690 kubelet[3407]: I0702 00:16:47.434647 3407 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 00:16:47.434690 kubelet[3407]: I0702 00:16:47.434673 3407 policy_none.go:49] "None policy: Start"
Jul 2 00:16:47.436669 kubelet[3407]: I0702 00:16:47.436640 3407 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:16:47.436669 kubelet[3407]: I0702 00:16:47.436673 3407 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:16:47.436958 kubelet[3407]: I0702 00:16:47.436938 3407 state_mem.go:75] "Updated machine memory state"
Jul 2 00:16:47.453492 kubelet[3407]: I0702 00:16:47.453464 3407 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:16:47.455282 kubelet[3407]: I0702 00:16:47.453677 3407 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 2 00:16:47.461940 kubelet[3407]: I0702 00:16:47.460886 3407 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:16:47.568512 kubelet[3407]: I0702 00:16:47.568324 3407 topology_manager.go:215] "Topology Admit Handler" podUID="f8e86bc52dd48d50dcc4022a78a8fa35" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:47.575873 kubelet[3407]: I0702 00:16:47.569007 3407 topology_manager.go:215] "Topology Admit Handler" podUID="56df6cdfb747d0676a12dfcc32c7031f" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-160"
Jul 2 00:16:47.577518 kubelet[3407]: I0702 00:16:47.569092 3407 topology_manager.go:215] "Topology Admit Handler" podUID="383f83b35ccf0f88f2575f9aa4c41a0c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-160"
Jul 2 00:16:47.596990 kubelet[3407]: E0702 00:16:47.596946 3407 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-160\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-160"
Jul 2 00:16:47.657364 kubelet[3407]: I0702 00:16:47.657043 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/383f83b35ccf0f88f2575f9aa4c41a0c-ca-certs\") pod \"kube-apiserver-ip-172-31-23-160\" (UID: \"383f83b35ccf0f88f2575f9aa4c41a0c\") " pod="kube-system/kube-apiserver-ip-172-31-23-160"
Jul 2 00:16:47.657364 kubelet[3407]: I0702 00:16:47.657100 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/383f83b35ccf0f88f2575f9aa4c41a0c-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-160\" (UID: \"383f83b35ccf0f88f2575f9aa4c41a0c\") " pod="kube-system/kube-apiserver-ip-172-31-23-160"
Jul 2 00:16:47.657364 kubelet[3407]: I0702 00:16:47.657158 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:47.657364 kubelet[3407]: I0702 00:16:47.657327 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/56df6cdfb747d0676a12dfcc32c7031f-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-160\" (UID: \"56df6cdfb747d0676a12dfcc32c7031f\") " pod="kube-system/kube-scheduler-ip-172-31-23-160"
Jul 2 00:16:47.657364 kubelet[3407]: I0702 00:16:47.657362 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/383f83b35ccf0f88f2575f9aa4c41a0c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-160\" (UID: \"383f83b35ccf0f88f2575f9aa4c41a0c\") " pod="kube-system/kube-apiserver-ip-172-31-23-160"
Jul 2 00:16:47.657932 kubelet[3407]: I0702 00:16:47.657387 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:47.657932 kubelet[3407]: I0702 00:16:47.657411 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:47.657932 kubelet[3407]: I0702 00:16:47.657435 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:47.657932 kubelet[3407]: I0702 00:16:47.657461 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8e86bc52dd48d50dcc4022a78a8fa35-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-160\" (UID: \"f8e86bc52dd48d50dcc4022a78a8fa35\") " pod="kube-system/kube-controller-manager-ip-172-31-23-160"
Jul 2 00:16:48.177545 kubelet[3407]: I0702 00:16:48.177505 3407 apiserver.go:52] "Watching apiserver"
Jul 2 00:16:48.249612 kubelet[3407]: I0702 00:16:48.248788 3407 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jul 2 00:16:48.327976 kubelet[3407]: I0702 00:16:48.327885 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-160" podStartSLOduration=4.327860347 podStartE2EDuration="4.327860347s" podCreationTimestamp="2024-07-02 00:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:16:48.298274387 +0000 UTC m=+1.266660419" watchObservedRunningTime="2024-07-02 00:16:48.327860347 +0000 UTC m=+1.296246356"
Jul 2 00:16:48.356552 kubelet[3407]: I0702 00:16:48.356079 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-160" podStartSLOduration=1.356059596 podStartE2EDuration="1.356059596s" podCreationTimestamp="2024-07-02 00:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:16:48.355571657 +0000 UTC m=+1.323957675" watchObservedRunningTime="2024-07-02 00:16:48.356059596 +0000 UTC m=+1.324445615"
Jul 2 00:16:48.356552 kubelet[3407]: I0702 00:16:48.356341 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-160" podStartSLOduration=1.356332321 podStartE2EDuration="1.356332321s" podCreationTimestamp="2024-07-02 00:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:16:48.332685347 +0000 UTC m=+1.301071366" watchObservedRunningTime="2024-07-02 00:16:48.356332321 +0000 UTC m=+1.324718341"
Jul 2 00:16:53.926502 sudo[2321]: pam_unix(sudo:session): session closed for user root
Jul 2 00:16:53.950579 sshd[2318]: pam_unix(sshd:session): session closed for user core
Jul 2 00:16:53.956182 systemd-logind[1947]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:16:53.956581 systemd[1]: sshd@8-172.31.23.160:22-147.75.109.163:44366.service: Deactivated successfully.
Jul 2 00:16:53.961053 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:16:53.962394 systemd[1]: session-9.scope: Consumed 5.345s CPU time, 136.0M memory peak, 0B memory swap peak.
Jul 2 00:16:53.964045 systemd-logind[1947]: Removed session 9.
Jul 2 00:17:00.679886 kubelet[3407]: I0702 00:17:00.679845 3407 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 00:17:00.681055 containerd[1977]: time="2024-07-02T00:17:00.681014114Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 00:17:00.681524 kubelet[3407]: I0702 00:17:00.681501 3407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:17:01.479411 kubelet[3407]: I0702 00:17:01.479352 3407 topology_manager.go:215] "Topology Admit Handler" podUID="31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b" podNamespace="kube-system" podName="kube-proxy-pbmhd"
Jul 2 00:17:01.510010 systemd[1]: Created slice kubepods-besteffort-pod31e7ad4c_a94d_45f7_b05f_2a1c5f0d419b.slice - libcontainer container kubepods-besteffort-pod31e7ad4c_a94d_45f7_b05f_2a1c5f0d419b.slice.
Jul 2 00:17:01.594955 kubelet[3407]: I0702 00:17:01.594902 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b-kube-proxy\") pod \"kube-proxy-pbmhd\" (UID: \"31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b\") " pod="kube-system/kube-proxy-pbmhd"
Jul 2 00:17:01.595135 kubelet[3407]: I0702 00:17:01.594965 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b-xtables-lock\") pod \"kube-proxy-pbmhd\" (UID: \"31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b\") " pod="kube-system/kube-proxy-pbmhd"
Jul 2 00:17:01.595135 kubelet[3407]: I0702 00:17:01.594991 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tbcv\" (UniqueName: \"kubernetes.io/projected/31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b-kube-api-access-2tbcv\") pod \"kube-proxy-pbmhd\" (UID: \"31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b\") " pod="kube-system/kube-proxy-pbmhd"
Jul 2 00:17:01.595135 kubelet[3407]: I0702 00:17:01.595018 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b-lib-modules\") pod \"kube-proxy-pbmhd\" (UID: \"31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b\") " pod="kube-system/kube-proxy-pbmhd"
Jul 2 00:17:01.855504 containerd[1977]: time="2024-07-02T00:17:01.842925495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbmhd,Uid:31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b,Namespace:kube-system,Attempt:0,}"
Jul 2 00:17:01.968448 containerd[1977]: time="2024-07-02T00:17:01.967378484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:01.968448 containerd[1977]: time="2024-07-02T00:17:01.967508577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:01.968448 containerd[1977]: time="2024-07-02T00:17:01.967558177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:01.968448 containerd[1977]: time="2024-07-02T00:17:01.967611892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:02.021870 systemd[1]: run-containerd-runc-k8s.io-05e2a6bd47e63b362f859d18f73df319141179911aa37b7a69ad01591f386859-runc.0WnOHo.mount: Deactivated successfully.
Jul 2 00:17:02.035148 systemd[1]: Started cri-containerd-05e2a6bd47e63b362f859d18f73df319141179911aa37b7a69ad01591f386859.scope - libcontainer container 05e2a6bd47e63b362f859d18f73df319141179911aa37b7a69ad01591f386859.
Jul 2 00:17:02.092297 kubelet[3407]: I0702 00:17:02.092258 3407 topology_manager.go:215] "Topology Admit Handler" podUID="e7e72ea8-51a5-4218-ae62-53b93cadf07b" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-zt2zh"
Jul 2 00:17:02.119354 systemd[1]: Created slice kubepods-besteffort-pode7e72ea8_51a5_4218_ae62_53b93cadf07b.slice - libcontainer container kubepods-besteffort-pode7e72ea8_51a5_4218_ae62_53b93cadf07b.slice.
Jul 2 00:17:02.152434 containerd[1977]: time="2024-07-02T00:17:02.152361938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbmhd,Uid:31e7ad4c-a94d-45f7-b05f-2a1c5f0d419b,Namespace:kube-system,Attempt:0,} returns sandbox id \"05e2a6bd47e63b362f859d18f73df319141179911aa37b7a69ad01591f386859\""
Jul 2 00:17:02.163824 containerd[1977]: time="2024-07-02T00:17:02.163628454Z" level=info msg="CreateContainer within sandbox \"05e2a6bd47e63b362f859d18f73df319141179911aa37b7a69ad01591f386859\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:17:02.192788 containerd[1977]: time="2024-07-02T00:17:02.192736984Z" level=info msg="CreateContainer within sandbox \"05e2a6bd47e63b362f859d18f73df319141179911aa37b7a69ad01591f386859\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f328f9898286336c20e9b6e627d1b8e8d8fbf4d1dd5bbbfa34e17a18f7681bf\""
Jul 2 00:17:02.194731 containerd[1977]: time="2024-07-02T00:17:02.193951487Z" level=info msg="StartContainer for \"5f328f9898286336c20e9b6e627d1b8e8d8fbf4d1dd5bbbfa34e17a18f7681bf\""
Jul 2 00:17:02.202251 kubelet[3407]: I0702 00:17:02.199498 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2g7\" (UniqueName: \"kubernetes.io/projected/e7e72ea8-51a5-4218-ae62-53b93cadf07b-kube-api-access-kl2g7\") pod \"tigera-operator-76ff79f7fd-zt2zh\" (UID: \"e7e72ea8-51a5-4218-ae62-53b93cadf07b\") " pod="tigera-operator/tigera-operator-76ff79f7fd-zt2zh"
Jul 2 00:17:02.202251 kubelet[3407]: I0702 00:17:02.199559 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7e72ea8-51a5-4218-ae62-53b93cadf07b-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-zt2zh\" (UID: \"e7e72ea8-51a5-4218-ae62-53b93cadf07b\") " pod="tigera-operator/tigera-operator-76ff79f7fd-zt2zh"
Jul 2 00:17:02.242470 systemd[1]: Started cri-containerd-5f328f9898286336c20e9b6e627d1b8e8d8fbf4d1dd5bbbfa34e17a18f7681bf.scope - libcontainer container 5f328f9898286336c20e9b6e627d1b8e8d8fbf4d1dd5bbbfa34e17a18f7681bf.
Jul 2 00:17:02.368201 containerd[1977]: time="2024-07-02T00:17:02.368141337Z" level=info msg="StartContainer for \"5f328f9898286336c20e9b6e627d1b8e8d8fbf4d1dd5bbbfa34e17a18f7681bf\" returns successfully"
Jul 2 00:17:02.410850 kubelet[3407]: I0702 00:17:02.410676 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pbmhd" podStartSLOduration=1.410654958 podStartE2EDuration="1.410654958s" podCreationTimestamp="2024-07-02 00:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:02.409198653 +0000 UTC m=+15.377584672" watchObservedRunningTime="2024-07-02 00:17:02.410654958 +0000 UTC m=+15.379040977"
Jul 2 00:17:02.424796 containerd[1977]: time="2024-07-02T00:17:02.424738993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-zt2zh,Uid:e7e72ea8-51a5-4218-ae62-53b93cadf07b,Namespace:tigera-operator,Attempt:0,}"
Jul 2 00:17:02.460657 containerd[1977]: time="2024-07-02T00:17:02.460537094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:02.460999 containerd[1977]: time="2024-07-02T00:17:02.460696963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:02.461201 containerd[1977]: time="2024-07-02T00:17:02.461148247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:02.462068 containerd[1977]: time="2024-07-02T00:17:02.462008611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:02.489938 systemd[1]: Started cri-containerd-de3929a7636c8bca9a72565dc77decfc166da03b2e2a19c40b9f6ea24ea95916.scope - libcontainer container de3929a7636c8bca9a72565dc77decfc166da03b2e2a19c40b9f6ea24ea95916.
Jul 2 00:17:02.590640 containerd[1977]: time="2024-07-02T00:17:02.590593022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-zt2zh,Uid:e7e72ea8-51a5-4218-ae62-53b93cadf07b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"de3929a7636c8bca9a72565dc77decfc166da03b2e2a19c40b9f6ea24ea95916\""
Jul 2 00:17:02.595947 containerd[1977]: time="2024-07-02T00:17:02.595900648Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 00:17:03.969702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616308210.mount: Deactivated successfully.
Jul 2 00:17:05.191824 containerd[1977]: time="2024-07-02T00:17:05.191773474Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:05.193419 containerd[1977]: time="2024-07-02T00:17:05.193217559Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076096"
Jul 2 00:17:05.195259 containerd[1977]: time="2024-07-02T00:17:05.195164072Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:05.199585 containerd[1977]: time="2024-07-02T00:17:05.198331552Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:05.199585 containerd[1977]: time="2024-07-02T00:17:05.199342322Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.602985321s"
Jul 2 00:17:05.199585 containerd[1977]: time="2024-07-02T00:17:05.199383565Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jul 2 00:17:05.243452 containerd[1977]: time="2024-07-02T00:17:05.243406482Z" level=info msg="CreateContainer within sandbox \"de3929a7636c8bca9a72565dc77decfc166da03b2e2a19c40b9f6ea24ea95916\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 00:17:05.266046 containerd[1977]: time="2024-07-02T00:17:05.265939335Z" level=info msg="CreateContainer within sandbox \"de3929a7636c8bca9a72565dc77decfc166da03b2e2a19c40b9f6ea24ea95916\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b\""
Jul 2 00:17:05.270598 containerd[1977]: time="2024-07-02T00:17:05.266643628Z" level=info msg="StartContainer for \"bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b\""
Jul 2 00:17:05.346958 systemd[1]: run-containerd-runc-k8s.io-bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b-runc.9P0ODJ.mount: Deactivated successfully.
Jul 2 00:17:05.358796 systemd[1]: Started cri-containerd-bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b.scope - libcontainer container bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b.
Jul 2 00:17:05.403756 containerd[1977]: time="2024-07-02T00:17:05.403712879Z" level=info msg="StartContainer for \"bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b\" returns successfully"
Jul 2 00:17:07.315142 kubelet[3407]: I0702 00:17:07.314550 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-zt2zh" podStartSLOduration=3.695721194 podStartE2EDuration="6.314417985s" podCreationTimestamp="2024-07-02 00:17:01 +0000 UTC" firstStartedPulling="2024-07-02 00:17:02.593191722 +0000 UTC m=+15.561577731" lastFinishedPulling="2024-07-02 00:17:05.211888514 +0000 UTC m=+18.180274522" observedRunningTime="2024-07-02 00:17:05.474708763 +0000 UTC m=+18.443094781" watchObservedRunningTime="2024-07-02 00:17:07.314417985 +0000 UTC m=+20.282804004"
Jul 2 00:17:09.153182 kubelet[3407]: I0702 00:17:09.153123 3407 topology_manager.go:215] "Topology Admit Handler" podUID="8ef50682-3671-4aa5-aedd-7bc28b6b70ab" podNamespace="calico-system" podName="calico-typha-7756b56cd7-xm7dz"
Jul 2 00:17:09.164771 systemd[1]: Created slice kubepods-besteffort-pod8ef50682_3671_4aa5_aedd_7bc28b6b70ab.slice - libcontainer container kubepods-besteffort-pod8ef50682_3671_4aa5_aedd_7bc28b6b70ab.slice.
Jul 2 00:17:09.315260 kubelet[3407]: I0702 00:17:09.305327 3407 topology_manager.go:215] "Topology Admit Handler" podUID="2757796f-4fda-460f-8bff-028dc203c1fe" podNamespace="calico-system" podName="calico-node-x5twf"
Jul 2 00:17:09.315721 kubelet[3407]: I0702 00:17:09.315635 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p6tz\" (UniqueName: \"kubernetes.io/projected/8ef50682-3671-4aa5-aedd-7bc28b6b70ab-kube-api-access-4p6tz\") pod \"calico-typha-7756b56cd7-xm7dz\" (UID: \"8ef50682-3671-4aa5-aedd-7bc28b6b70ab\") " pod="calico-system/calico-typha-7756b56cd7-xm7dz"
Jul 2 00:17:09.315883 kubelet[3407]: I0702 00:17:09.315863 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ef50682-3671-4aa5-aedd-7bc28b6b70ab-tigera-ca-bundle\") pod \"calico-typha-7756b56cd7-xm7dz\" (UID: \"8ef50682-3671-4aa5-aedd-7bc28b6b70ab\") " pod="calico-system/calico-typha-7756b56cd7-xm7dz"
Jul 2 00:17:09.315986 kubelet[3407]: I0702 00:17:09.315970 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8ef50682-3671-4aa5-aedd-7bc28b6b70ab-typha-certs\") pod \"calico-typha-7756b56cd7-xm7dz\" (UID: \"8ef50682-3671-4aa5-aedd-7bc28b6b70ab\") " pod="calico-system/calico-typha-7756b56cd7-xm7dz"
Jul 2 00:17:09.324830 kubelet[3407]: W0702 00:17:09.324782 3407 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-23-160" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-23-160' and this object
Jul 2 00:17:09.324830 kubelet[3407]: E0702 00:17:09.324830 3407 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-23-160" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-23-160' and this object
Jul 2 00:17:09.325423 kubelet[3407]: W0702 00:17:09.325390 3407 reflector.go:547] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-23-160" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-23-160' and this object
Jul 2 00:17:09.325423 kubelet[3407]: E0702 00:17:09.325423 3407 reflector.go:150] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-23-160" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-23-160' and this object
Jul 2 00:17:09.332816 systemd[1]: Created slice kubepods-besteffort-pod2757796f_4fda_460f_8bff_028dc203c1fe.slice - libcontainer container kubepods-besteffort-pod2757796f_4fda_460f_8bff_028dc203c1fe.slice.
Jul 2 00:17:09.419606 kubelet[3407]: I0702 00:17:09.419151 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2757796f-4fda-460f-8bff-028dc203c1fe-var-lib-calico\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.424169 kubelet[3407]: I0702 00:17:09.424119 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2757796f-4fda-460f-8bff-028dc203c1fe-flexvol-driver-host\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425414 kubelet[3407]: I0702 00:17:09.424211 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4bn5\" (UniqueName: \"kubernetes.io/projected/2757796f-4fda-460f-8bff-028dc203c1fe-kube-api-access-d4bn5\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425414 kubelet[3407]: I0702 00:17:09.424255 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2757796f-4fda-460f-8bff-028dc203c1fe-node-certs\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425414 kubelet[3407]: I0702 00:17:09.424284 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2757796f-4fda-460f-8bff-028dc203c1fe-lib-modules\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425414 kubelet[3407]: I0702 00:17:09.424516 
3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2757796f-4fda-460f-8bff-028dc203c1fe-cni-bin-dir\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425414 kubelet[3407]: I0702 00:17:09.424574 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2757796f-4fda-460f-8bff-028dc203c1fe-tigera-ca-bundle\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425713 kubelet[3407]: I0702 00:17:09.424602 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2757796f-4fda-460f-8bff-028dc203c1fe-cni-net-dir\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425713 kubelet[3407]: I0702 00:17:09.424634 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2757796f-4fda-460f-8bff-028dc203c1fe-xtables-lock\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425713 kubelet[3407]: I0702 00:17:09.424660 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2757796f-4fda-460f-8bff-028dc203c1fe-policysync\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425713 kubelet[3407]: I0702 00:17:09.424690 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2757796f-4fda-460f-8bff-028dc203c1fe-var-run-calico\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.425713 kubelet[3407]: I0702 00:17:09.424715 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2757796f-4fda-460f-8bff-028dc203c1fe-cni-log-dir\") pod \"calico-node-x5twf\" (UID: \"2757796f-4fda-460f-8bff-028dc203c1fe\") " pod="calico-system/calico-node-x5twf" Jul 2 00:17:09.475893 containerd[1977]: time="2024-07-02T00:17:09.473809719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7756b56cd7-xm7dz,Uid:8ef50682-3671-4aa5-aedd-7bc28b6b70ab,Namespace:calico-system,Attempt:0,}" Jul 2 00:17:09.476420 kubelet[3407]: I0702 00:17:09.475254 3407 topology_manager.go:215] "Topology Admit Handler" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" podNamespace="calico-system" podName="csi-node-driver-rjx52" Jul 2 00:17:09.476420 kubelet[3407]: E0702 00:17:09.475628 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:09.535401 kubelet[3407]: E0702 00:17:09.535365 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.535401 kubelet[3407]: W0702 00:17:09.535397 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.535617 kubelet[3407]: E0702 00:17:09.535431 3407 plugins.go:730] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.551983 kubelet[3407]: E0702 00:17:09.551865 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.551983 kubelet[3407]: W0702 00:17:09.551893 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.551983 kubelet[3407]: E0702 00:17:09.551917 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.552830 kubelet[3407]: E0702 00:17:09.552699 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.552830 kubelet[3407]: W0702 00:17:09.552718 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.552830 kubelet[3407]: E0702 00:17:09.552738 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.554320 kubelet[3407]: E0702 00:17:09.553984 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.554320 kubelet[3407]: W0702 00:17:09.554001 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.554320 kubelet[3407]: E0702 00:17:09.554025 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.556385 kubelet[3407]: E0702 00:17:09.556359 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.556385 kubelet[3407]: W0702 00:17:09.556383 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.556509 kubelet[3407]: E0702 00:17:09.556402 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.575959 kubelet[3407]: E0702 00:17:09.575862 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.576460 kubelet[3407]: W0702 00:17:09.575873 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.576460 kubelet[3407]: E0702 00:17:09.575887 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.623802 containerd[1977]: time="2024-07-02T00:17:09.623410685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:17:09.623802 containerd[1977]: time="2024-07-02T00:17:09.623481498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:09.623802 containerd[1977]: time="2024-07-02T00:17:09.623505652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:17:09.623802 containerd[1977]: time="2024-07-02T00:17:09.623525107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:09.631739 kubelet[3407]: E0702 00:17:09.630365 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.631739 kubelet[3407]: W0702 00:17:09.630395 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.631739 kubelet[3407]: E0702 00:17:09.630421 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.631739 kubelet[3407]: I0702 00:17:09.630465 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9b7f5e58-3c44-472f-85f0-c915eb069355-socket-dir\") pod \"csi-node-driver-rjx52\" (UID: \"9b7f5e58-3c44-472f-85f0-c915eb069355\") " pod="calico-system/csi-node-driver-rjx52" Jul 2 00:17:09.631739 kubelet[3407]: E0702 00:17:09.630803 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.631739 kubelet[3407]: W0702 00:17:09.630816 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.632537 kubelet[3407]: E0702 00:17:09.632499 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.632634 kubelet[3407]: I0702 00:17:09.632551 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2fw6\" (UniqueName: \"kubernetes.io/projected/9b7f5e58-3c44-472f-85f0-c915eb069355-kube-api-access-m2fw6\") pod \"csi-node-driver-rjx52\" (UID: \"9b7f5e58-3c44-472f-85f0-c915eb069355\") " pod="calico-system/csi-node-driver-rjx52" Jul 2 00:17:09.635336 kubelet[3407]: E0702 00:17:09.635294 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.635336 kubelet[3407]: W0702 00:17:09.635318 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.635495 kubelet[3407]: E0702 00:17:09.635367 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.636408 kubelet[3407]: E0702 00:17:09.636343 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.636408 kubelet[3407]: W0702 00:17:09.636366 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.636643 kubelet[3407]: E0702 00:17:09.636574 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.636978 kubelet[3407]: E0702 00:17:09.636763 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.636978 kubelet[3407]: W0702 00:17:09.636775 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.638209 kubelet[3407]: E0702 00:17:09.638183 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.639247 kubelet[3407]: I0702 00:17:09.638374 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9b7f5e58-3c44-472f-85f0-c915eb069355-registration-dir\") pod \"csi-node-driver-rjx52\" (UID: \"9b7f5e58-3c44-472f-85f0-c915eb069355\") " pod="calico-system/csi-node-driver-rjx52" Jul 2 00:17:09.639247 kubelet[3407]: E0702 00:17:09.638491 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.639247 kubelet[3407]: W0702 00:17:09.638501 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.639247 kubelet[3407]: E0702 00:17:09.638530 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.639247 kubelet[3407]: E0702 00:17:09.638767 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.639247 kubelet[3407]: W0702 00:17:09.638776 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.639247 kubelet[3407]: E0702 00:17:09.638801 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.639247 kubelet[3407]: E0702 00:17:09.639067 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.639247 kubelet[3407]: W0702 00:17:09.639077 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.639661 kubelet[3407]: E0702 00:17:09.639103 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.639661 kubelet[3407]: E0702 00:17:09.639386 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.639661 kubelet[3407]: W0702 00:17:09.639399 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.639661 kubelet[3407]: E0702 00:17:09.639426 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.639661 kubelet[3407]: I0702 00:17:09.639452 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b7f5e58-3c44-472f-85f0-c915eb069355-kubelet-dir\") pod \"csi-node-driver-rjx52\" (UID: \"9b7f5e58-3c44-472f-85f0-c915eb069355\") " pod="calico-system/csi-node-driver-rjx52" Jul 2 00:17:09.641449 kubelet[3407]: E0702 00:17:09.640305 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.641449 kubelet[3407]: W0702 00:17:09.640326 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.641449 kubelet[3407]: E0702 00:17:09.640345 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.641449 kubelet[3407]: E0702 00:17:09.640604 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.641449 kubelet[3407]: W0702 00:17:09.640613 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.641449 kubelet[3407]: E0702 00:17:09.640639 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.641449 kubelet[3407]: E0702 00:17:09.640920 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.641449 kubelet[3407]: W0702 00:17:09.640930 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.643996 kubelet[3407]: E0702 00:17:09.642264 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.643996 kubelet[3407]: I0702 00:17:09.642308 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9b7f5e58-3c44-472f-85f0-c915eb069355-varrun\") pod \"csi-node-driver-rjx52\" (UID: \"9b7f5e58-3c44-472f-85f0-c915eb069355\") " pod="calico-system/csi-node-driver-rjx52" Jul 2 00:17:09.643996 kubelet[3407]: E0702 00:17:09.642541 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.643996 kubelet[3407]: W0702 00:17:09.642554 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.643996 kubelet[3407]: E0702 00:17:09.642572 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.643996 kubelet[3407]: E0702 00:17:09.642784 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.643996 kubelet[3407]: W0702 00:17:09.642792 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.643996 kubelet[3407]: E0702 00:17:09.642832 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.643996 kubelet[3407]: E0702 00:17:09.643063 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.644759 kubelet[3407]: W0702 00:17:09.643072 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.644759 kubelet[3407]: E0702 00:17:09.643083 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.644759 kubelet[3407]: E0702 00:17:09.643308 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.644759 kubelet[3407]: W0702 00:17:09.643316 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.644759 kubelet[3407]: E0702 00:17:09.643355 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.682496 systemd[1]: Started cri-containerd-74c8a0198a99ebfd2a77904ad678e1e0ded8da5415117eb21ad7ca3388d0417f.scope - libcontainer container 74c8a0198a99ebfd2a77904ad678e1e0ded8da5415117eb21ad7ca3388d0417f. 
Jul 2 00:17:09.743895 kubelet[3407]: E0702 00:17:09.743859 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.743895 kubelet[3407]: W0702 00:17:09.743891 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.744367 kubelet[3407]: E0702 00:17:09.743914 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.746273 kubelet[3407]: E0702 00:17:09.744588 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.746273 kubelet[3407]: W0702 00:17:09.744607 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.746273 kubelet[3407]: E0702 00:17:09.744705 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.746273 kubelet[3407]: E0702 00:17:09.744991 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.746273 kubelet[3407]: W0702 00:17:09.745001 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.746273 kubelet[3407]: E0702 00:17:09.745017 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.753272 kubelet[3407]: E0702 00:17:09.753075 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.753272 kubelet[3407]: E0702 00:17:09.753129 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.753272 kubelet[3407]: W0702 00:17:09.753137 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.753272 kubelet[3407]: E0702 00:17:09.753153 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.753843 kubelet[3407]: E0702 00:17:09.753417 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.753843 kubelet[3407]: W0702 00:17:09.753428 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.753843 kubelet[3407]: E0702 00:17:09.753445 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.753843 kubelet[3407]: E0702 00:17:09.753697 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.753843 kubelet[3407]: W0702 00:17:09.753706 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.753843 kubelet[3407]: E0702 00:17:09.753730 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.754108 kubelet[3407]: E0702 00:17:09.753975 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.754108 kubelet[3407]: W0702 00:17:09.753985 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.754108 kubelet[3407]: E0702 00:17:09.754012 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.754707 kubelet[3407]: E0702 00:17:09.754685 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.754707 kubelet[3407]: W0702 00:17:09.754705 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.754829 kubelet[3407]: E0702 00:17:09.754723 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.757273 kubelet[3407]: E0702 00:17:09.754964 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.757273 kubelet[3407]: W0702 00:17:09.754976 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.757273 kubelet[3407]: E0702 00:17:09.755054 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.757273 kubelet[3407]: E0702 00:17:09.755296 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.757273 kubelet[3407]: W0702 00:17:09.755306 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.757273 kubelet[3407]: E0702 00:17:09.755323 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.757273 kubelet[3407]: E0702 00:17:09.755656 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.757273 kubelet[3407]: W0702 00:17:09.755666 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.757273 kubelet[3407]: E0702 00:17:09.755692 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.757273 kubelet[3407]: E0702 00:17:09.755952 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.757786 kubelet[3407]: W0702 00:17:09.755962 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.757786 kubelet[3407]: E0702 00:17:09.755988 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.757786 kubelet[3407]: E0702 00:17:09.756259 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.757786 kubelet[3407]: W0702 00:17:09.756269 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.757786 kubelet[3407]: E0702 00:17:09.756280 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.816262 kubelet[3407]: E0702 00:17:09.815723 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.816262 kubelet[3407]: W0702 00:17:09.815750 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.816262 kubelet[3407]: E0702 00:17:09.815772 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:09.849436 kubelet[3407]: E0702 00:17:09.849286 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.849436 kubelet[3407]: W0702 00:17:09.849312 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.849436 kubelet[3407]: E0702 00:17:09.849337 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:09.862582 containerd[1977]: time="2024-07-02T00:17:09.862406371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7756b56cd7-xm7dz,Uid:8ef50682-3671-4aa5-aedd-7bc28b6b70ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"74c8a0198a99ebfd2a77904ad678e1e0ded8da5415117eb21ad7ca3388d0417f\"" Jul 2 00:17:09.865947 containerd[1977]: time="2024-07-02T00:17:09.864655249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:17:09.951951 kubelet[3407]: E0702 00:17:09.951710 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:09.951951 kubelet[3407]: W0702 00:17:09.951855 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:09.953035 kubelet[3407]: E0702 00:17:09.951887 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:10.054511 kubelet[3407]: E0702 00:17:10.054475 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:10.054511 kubelet[3407]: W0702 00:17:10.054506 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:10.054744 kubelet[3407]: E0702 00:17:10.054529 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:10.360846 kubelet[3407]: E0702 00:17:10.360810 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:10.360846 kubelet[3407]: W0702 00:17:10.360837 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:10.361034 kubelet[3407]: E0702 00:17:10.360863 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:10.462151 kubelet[3407]: E0702 00:17:10.462109 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:10.462151 kubelet[3407]: W0702 00:17:10.462139 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:10.462408 kubelet[3407]: E0702 00:17:10.462166 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:10.549452 kubelet[3407]: E0702 00:17:10.531813 3407 secret.go:194] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 00:17:10.549452 kubelet[3407]: E0702 00:17:10.532010 3407 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2757796f-4fda-460f-8bff-028dc203c1fe-node-certs podName:2757796f-4fda-460f-8bff-028dc203c1fe nodeName:}" failed. No retries permitted until 2024-07-02 00:17:11.031980099 +0000 UTC m=+24.000366116 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/2757796f-4fda-460f-8bff-028dc203c1fe-node-certs") pod "calico-node-x5twf" (UID: "2757796f-4fda-460f-8bff-028dc203c1fe") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:17:10.564050 kubelet[3407]: E0702 00:17:10.563998 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:10.564422 kubelet[3407]: W0702 00:17:10.564134 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:10.564422 kubelet[3407]: E0702 00:17:10.564164 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:10.666954 kubelet[3407]: E0702 00:17:10.666906 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:10.666954 kubelet[3407]: W0702 00:17:10.666947 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:10.667290 kubelet[3407]: E0702 00:17:10.666978 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:11.142538 kubelet[3407]: E0702 00:17:11.142127 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:11.142538 kubelet[3407]: W0702 00:17:11.142262 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:11.142538 kubelet[3407]: E0702 00:17:11.142292 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:11.275542 kubelet[3407]: E0702 00:17:11.273390 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:11.450968 containerd[1977]: time="2024-07-02T00:17:11.442093602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x5twf,Uid:2757796f-4fda-460f-8bff-028dc203c1fe,Namespace:calico-system,Attempt:0,}" Jul 2 00:17:11.623418 containerd[1977]: time="2024-07-02T00:17:11.613060190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:17:11.623418 containerd[1977]: time="2024-07-02T00:17:11.613151133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:11.623418 containerd[1977]: time="2024-07-02T00:17:11.613191023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:17:11.623418 containerd[1977]: time="2024-07-02T00:17:11.613212763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:11.690723 systemd[1]: Started cri-containerd-e473ff4f046626e4ae06cdc3ffcf2996b7af79296975d59184fbba5b0df6a81f.scope - libcontainer container e473ff4f046626e4ae06cdc3ffcf2996b7af79296975d59184fbba5b0df6a81f. Jul 2 00:17:11.787045 containerd[1977]: time="2024-07-02T00:17:11.786036467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x5twf,Uid:2757796f-4fda-460f-8bff-028dc203c1fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"e473ff4f046626e4ae06cdc3ffcf2996b7af79296975d59184fbba5b0df6a81f\"" Jul 2 00:17:13.270535 kubelet[3407]: E0702 00:17:13.267987 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:13.495371 containerd[1977]: time="2024-07-02T00:17:13.495310385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:13.498356 containerd[1977]: time="2024-07-02T00:17:13.498281118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 00:17:13.498886 containerd[1977]: time="2024-07-02T00:17:13.498619640Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:13.511999 containerd[1977]: time="2024-07-02T00:17:13.511124757Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:13.520910 containerd[1977]: time="2024-07-02T00:17:13.520783398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.656080579s" Jul 2 00:17:13.521493 containerd[1977]: time="2024-07-02T00:17:13.521457030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:17:13.526203 containerd[1977]: time="2024-07-02T00:17:13.525690041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:17:13.564097 containerd[1977]: time="2024-07-02T00:17:13.563947457Z" level=info msg="CreateContainer within sandbox \"74c8a0198a99ebfd2a77904ad678e1e0ded8da5415117eb21ad7ca3388d0417f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:17:13.697971 containerd[1977]: time="2024-07-02T00:17:13.697915442Z" level=info msg="CreateContainer within sandbox \"74c8a0198a99ebfd2a77904ad678e1e0ded8da5415117eb21ad7ca3388d0417f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"901a5d8c18330b8a9d2a8c8bb7498725231b076c8337de9ca34bcfda7fe0ba32\"" Jul 2 00:17:13.700390 containerd[1977]: time="2024-07-02T00:17:13.700224736Z" level=info msg="StartContainer for \"901a5d8c18330b8a9d2a8c8bb7498725231b076c8337de9ca34bcfda7fe0ba32\"" Jul 2 00:17:13.821458 systemd[1]: Started cri-containerd-901a5d8c18330b8a9d2a8c8bb7498725231b076c8337de9ca34bcfda7fe0ba32.scope - libcontainer container 
901a5d8c18330b8a9d2a8c8bb7498725231b076c8337de9ca34bcfda7fe0ba32. Jul 2 00:17:13.916661 containerd[1977]: time="2024-07-02T00:17:13.916549106Z" level=info msg="StartContainer for \"901a5d8c18330b8a9d2a8c8bb7498725231b076c8337de9ca34bcfda7fe0ba32\" returns successfully" Jul 2 00:17:14.499965 kubelet[3407]: I0702 00:17:14.499837 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7756b56cd7-xm7dz" podStartSLOduration=1.839493355 podStartE2EDuration="5.499817144s" podCreationTimestamp="2024-07-02 00:17:09 +0000 UTC" firstStartedPulling="2024-07-02 00:17:09.864068601 +0000 UTC m=+22.832454612" lastFinishedPulling="2024-07-02 00:17:13.52439239 +0000 UTC m=+26.492778401" observedRunningTime="2024-07-02 00:17:14.497917905 +0000 UTC m=+27.466303947" watchObservedRunningTime="2024-07-02 00:17:14.499817144 +0000 UTC m=+27.468203162" Jul 2 00:17:14.536578 systemd[1]: run-containerd-runc-k8s.io-901a5d8c18330b8a9d2a8c8bb7498725231b076c8337de9ca34bcfda7fe0ba32-runc.3x96r8.mount: Deactivated successfully. Jul 2 00:17:14.547077 kubelet[3407]: E0702 00:17:14.547045 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:14.547077 kubelet[3407]: W0702 00:17:14.547075 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:14.547481 kubelet[3407]: E0702 00:17:14.547100 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:14.547481 kubelet[3407]: E0702 00:17:14.547424 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:14.547481 kubelet[3407]: W0702 00:17:14.547437 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:14.547481 kubelet[3407]: E0702 00:17:14.547453 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:17:14.547694 kubelet[3407]: E0702 00:17:14.547689 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:17:14.547743 kubelet[3407]: W0702 00:17:14.547697 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:17:14.547743 kubelet[3407]: E0702 00:17:14.547710 3407 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:17:15.137782 containerd[1977]: time="2024-07-02T00:17:15.137728106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:15.139600 containerd[1977]: time="2024-07-02T00:17:15.139351079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 00:17:15.141265 containerd[1977]: time="2024-07-02T00:17:15.141098985Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:15.146224 containerd[1977]: time="2024-07-02T00:17:15.145815598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:15.147719 containerd[1977]: time="2024-07-02T00:17:15.147544378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.621741257s" Jul 2 00:17:15.147719 containerd[1977]: time="2024-07-02T00:17:15.147592747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:17:15.158098 containerd[1977]: time="2024-07-02T00:17:15.157363234Z" level=info msg="CreateContainer within sandbox \"e473ff4f046626e4ae06cdc3ffcf2996b7af79296975d59184fbba5b0df6a81f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:17:15.188839 containerd[1977]: time="2024-07-02T00:17:15.188770625Z" level=info msg="CreateContainer within sandbox \"e473ff4f046626e4ae06cdc3ffcf2996b7af79296975d59184fbba5b0df6a81f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f70aad353166ca07daf0714ab710b342bcea0ea9991a770eb1cfbfae7e9fb6bb\"" Jul 2 00:17:15.194302 containerd[1977]: time="2024-07-02T00:17:15.191473671Z" level=info msg="StartContainer for \"f70aad353166ca07daf0714ab710b342bcea0ea9991a770eb1cfbfae7e9fb6bb\"" Jul 2 00:17:15.268196 kubelet[3407]: E0702 00:17:15.267843 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:15.322782 systemd[1]: Started cri-containerd-f70aad353166ca07daf0714ab710b342bcea0ea9991a770eb1cfbfae7e9fb6bb.scope - libcontainer container f70aad353166ca07daf0714ab710b342bcea0ea9991a770eb1cfbfae7e9fb6bb. Jul 2 00:17:15.429651 containerd[1977]: time="2024-07-02T00:17:15.429530451Z" level=info msg="StartContainer for \"f70aad353166ca07daf0714ab710b342bcea0ea9991a770eb1cfbfae7e9fb6bb\" returns successfully" Jul 2 00:17:15.463794 systemd[1]: cri-containerd-f70aad353166ca07daf0714ab710b342bcea0ea9991a770eb1cfbfae7e9fb6bb.scope: Deactivated successfully. Jul 2 00:17:15.483415 kubelet[3407]: I0702 00:17:15.483385 3407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:17:15.559346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f70aad353166ca07daf0714ab710b342bcea0ea9991a770eb1cfbfae7e9fb6bb-rootfs.mount: Deactivated successfully. 
Jul 2 00:17:15.838189 containerd[1977]: time="2024-07-02T00:17:15.829096996Z" level=info msg="shim disconnected" id=f70aad353166ca07daf0714ab710b342bcea0ea9991a770eb1cfbfae7e9fb6bb namespace=k8s.io Jul 2 00:17:15.838189 containerd[1977]: time="2024-07-02T00:17:15.838107446Z" level=warning msg="cleaning up after shim disconnected" id=f70aad353166ca07daf0714ab710b342bcea0ea9991a770eb1cfbfae7e9fb6bb namespace=k8s.io Jul 2 00:17:15.838189 containerd[1977]: time="2024-07-02T00:17:15.838128025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:17:16.492082 containerd[1977]: time="2024-07-02T00:17:16.492036417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:17:17.268284 kubelet[3407]: E0702 00:17:17.267989 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:19.269057 kubelet[3407]: E0702 00:17:19.267421 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:21.269902 kubelet[3407]: E0702 00:17:21.269108 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:23.267891 kubelet[3407]: E0702 00:17:23.267839 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:23.862265 containerd[1977]: time="2024-07-02T00:17:23.862199647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:23.864402 containerd[1977]: time="2024-07-02T00:17:23.864221096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:17:23.867367 containerd[1977]: time="2024-07-02T00:17:23.866893245Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:23.870918 containerd[1977]: time="2024-07-02T00:17:23.870854785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:23.872117 containerd[1977]: time="2024-07-02T00:17:23.872074190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 7.379990256s" Jul 2 00:17:23.872930 containerd[1977]: time="2024-07-02T00:17:23.872760220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:17:23.876814 containerd[1977]: time="2024-07-02T00:17:23.876776205Z" level=info msg="CreateContainer within sandbox 
\"e473ff4f046626e4ae06cdc3ffcf2996b7af79296975d59184fbba5b0df6a81f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:17:23.930180 containerd[1977]: time="2024-07-02T00:17:23.929920464Z" level=info msg="CreateContainer within sandbox \"e473ff4f046626e4ae06cdc3ffcf2996b7af79296975d59184fbba5b0df6a81f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4f58fdeef286545d183cc067c1dff1f2934c9adcca0b8f9997c66cbf4014c665\"" Jul 2 00:17:23.935571 containerd[1977]: time="2024-07-02T00:17:23.934413197Z" level=info msg="StartContainer for \"4f58fdeef286545d183cc067c1dff1f2934c9adcca0b8f9997c66cbf4014c665\"" Jul 2 00:17:24.071837 systemd[1]: Started cri-containerd-4f58fdeef286545d183cc067c1dff1f2934c9adcca0b8f9997c66cbf4014c665.scope - libcontainer container 4f58fdeef286545d183cc067c1dff1f2934c9adcca0b8f9997c66cbf4014c665. Jul 2 00:17:24.172376 containerd[1977]: time="2024-07-02T00:17:24.172315526Z" level=info msg="StartContainer for \"4f58fdeef286545d183cc067c1dff1f2934c9adcca0b8f9997c66cbf4014c665\" returns successfully" Jul 2 00:17:25.271603 kubelet[3407]: E0702 00:17:25.267550 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:27.269253 kubelet[3407]: E0702 00:17:27.267807 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355" Jul 2 00:17:28.257836 systemd[1]: cri-containerd-4f58fdeef286545d183cc067c1dff1f2934c9adcca0b8f9997c66cbf4014c665.scope: Deactivated successfully. 
Jul 2 00:17:28.325361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f58fdeef286545d183cc067c1dff1f2934c9adcca0b8f9997c66cbf4014c665-rootfs.mount: Deactivated successfully. Jul 2 00:17:28.337350 containerd[1977]: time="2024-07-02T00:17:28.337200549Z" level=info msg="shim disconnected" id=4f58fdeef286545d183cc067c1dff1f2934c9adcca0b8f9997c66cbf4014c665 namespace=k8s.io Jul 2 00:17:28.337350 containerd[1977]: time="2024-07-02T00:17:28.337303736Z" level=warning msg="cleaning up after shim disconnected" id=4f58fdeef286545d183cc067c1dff1f2934c9adcca0b8f9997c66cbf4014c665 namespace=k8s.io Jul 2 00:17:28.337350 containerd[1977]: time="2024-07-02T00:17:28.337321744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:17:28.345094 kubelet[3407]: I0702 00:17:28.345011 3407 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:17:28.413762 kubelet[3407]: I0702 00:17:28.413716 3407 topology_manager.go:215] "Topology Admit Handler" podUID="cf1b2163-9933-4743-b885-dee3d2f16bf0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jfdk2" Jul 2 00:17:28.427257 kubelet[3407]: I0702 00:17:28.424629 3407 topology_manager.go:215] "Topology Admit Handler" podUID="951ede4b-50c9-48d8-8b1e-92e6c2b137f6" podNamespace="calico-system" podName="calico-kube-controllers-55cb76f7f7-v94d6" Jul 2 00:17:28.427257 kubelet[3407]: I0702 00:17:28.426491 3407 topology_manager.go:215] "Topology Admit Handler" podUID="545bdab3-f4e9-4d98-8606-4a0a243e8137" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jds8w" Jul 2 00:17:28.431475 systemd[1]: Created slice kubepods-burstable-podcf1b2163_9933_4743_b885_dee3d2f16bf0.slice - libcontainer container kubepods-burstable-podcf1b2163_9933_4743_b885_dee3d2f16bf0.slice. Jul 2 00:17:28.448580 systemd[1]: Created slice kubepods-besteffort-pod951ede4b_50c9_48d8_8b1e_92e6c2b137f6.slice - libcontainer container kubepods-besteffort-pod951ede4b_50c9_48d8_8b1e_92e6c2b137f6.slice. 
Jul 2 00:17:28.465687 systemd[1]: Created slice kubepods-burstable-pod545bdab3_f4e9_4d98_8606_4a0a243e8137.slice - libcontainer container kubepods-burstable-pod545bdab3_f4e9_4d98_8606_4a0a243e8137.slice.
Jul 2 00:17:28.483170 kubelet[3407]: I0702 00:17:28.482667 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/951ede4b-50c9-48d8-8b1e-92e6c2b137f6-tigera-ca-bundle\") pod \"calico-kube-controllers-55cb76f7f7-v94d6\" (UID: \"951ede4b-50c9-48d8-8b1e-92e6c2b137f6\") " pod="calico-system/calico-kube-controllers-55cb76f7f7-v94d6"
Jul 2 00:17:28.483170 kubelet[3407]: I0702 00:17:28.482769 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv8dl\" (UniqueName: \"kubernetes.io/projected/545bdab3-f4e9-4d98-8606-4a0a243e8137-kube-api-access-rv8dl\") pod \"coredns-7db6d8ff4d-jds8w\" (UID: \"545bdab3-f4e9-4d98-8606-4a0a243e8137\") " pod="kube-system/coredns-7db6d8ff4d-jds8w"
Jul 2 00:17:28.483170 kubelet[3407]: I0702 00:17:28.482839 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jv4x\" (UniqueName: \"kubernetes.io/projected/cf1b2163-9933-4743-b885-dee3d2f16bf0-kube-api-access-4jv4x\") pod \"coredns-7db6d8ff4d-jfdk2\" (UID: \"cf1b2163-9933-4743-b885-dee3d2f16bf0\") " pod="kube-system/coredns-7db6d8ff4d-jfdk2"
Jul 2 00:17:28.483170 kubelet[3407]: I0702 00:17:28.482871 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf1b2163-9933-4743-b885-dee3d2f16bf0-config-volume\") pod \"coredns-7db6d8ff4d-jfdk2\" (UID: \"cf1b2163-9933-4743-b885-dee3d2f16bf0\") " pod="kube-system/coredns-7db6d8ff4d-jfdk2"
Jul 2 00:17:28.483170 kubelet[3407]: I0702 00:17:28.482928 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgrrb\" (UniqueName: \"kubernetes.io/projected/951ede4b-50c9-48d8-8b1e-92e6c2b137f6-kube-api-access-cgrrb\") pod \"calico-kube-controllers-55cb76f7f7-v94d6\" (UID: \"951ede4b-50c9-48d8-8b1e-92e6c2b137f6\") " pod="calico-system/calico-kube-controllers-55cb76f7f7-v94d6"
Jul 2 00:17:28.483935 kubelet[3407]: I0702 00:17:28.482953 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/545bdab3-f4e9-4d98-8606-4a0a243e8137-config-volume\") pod \"coredns-7db6d8ff4d-jds8w\" (UID: \"545bdab3-f4e9-4d98-8606-4a0a243e8137\") " pod="kube-system/coredns-7db6d8ff4d-jds8w"
Jul 2 00:17:28.545696 containerd[1977]: time="2024-07-02T00:17:28.543968362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Jul 2 00:17:28.743373 containerd[1977]: time="2024-07-02T00:17:28.743330134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jfdk2,Uid:cf1b2163-9933-4743-b885-dee3d2f16bf0,Namespace:kube-system,Attempt:0,}"
Jul 2 00:17:28.759380 containerd[1977]: time="2024-07-02T00:17:28.759286390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cb76f7f7-v94d6,Uid:951ede4b-50c9-48d8-8b1e-92e6c2b137f6,Namespace:calico-system,Attempt:0,}"
Jul 2 00:17:28.787253 containerd[1977]: time="2024-07-02T00:17:28.783487330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jds8w,Uid:545bdab3-f4e9-4d98-8606-4a0a243e8137,Namespace:kube-system,Attempt:0,}"
Jul 2 00:17:29.288696 systemd[1]: Created slice kubepods-besteffort-pod9b7f5e58_3c44_472f_85f0_c915eb069355.slice - libcontainer container kubepods-besteffort-pod9b7f5e58_3c44_472f_85f0_c915eb069355.slice.
Jul 2 00:17:29.293710 containerd[1977]: time="2024-07-02T00:17:29.293542121Z" level=error msg="Failed to destroy network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.299108 containerd[1977]: time="2024-07-02T00:17:29.298600867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjx52,Uid:9b7f5e58-3c44-472f-85f0-c915eb069355,Namespace:calico-system,Attempt:0,}"
Jul 2 00:17:29.300449 containerd[1977]: time="2024-07-02T00:17:29.300407653Z" level=error msg="Failed to destroy network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.301177 containerd[1977]: time="2024-07-02T00:17:29.300807304Z" level=error msg="encountered an error cleaning up failed sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.301283 containerd[1977]: time="2024-07-02T00:17:29.301212498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cb76f7f7-v94d6,Uid:951ede4b-50c9-48d8-8b1e-92e6c2b137f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.301391 containerd[1977]: time="2024-07-02T00:17:29.300896277Z" level=error msg="Failed to destroy network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.301850 containerd[1977]: time="2024-07-02T00:17:29.301712836Z" level=error msg="encountered an error cleaning up failed sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.301932 containerd[1977]: time="2024-07-02T00:17:29.301877100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jfdk2,Uid:cf1b2163-9933-4743-b885-dee3d2f16bf0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.302182 kubelet[3407]: E0702 00:17:29.302109 3407 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.302725 kubelet[3407]: E0702 00:17:29.302276 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55cb76f7f7-v94d6"
Jul 2 00:17:29.302725 kubelet[3407]: E0702 00:17:29.302109 3407 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.302725 kubelet[3407]: E0702 00:17:29.302324 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55cb76f7f7-v94d6"
Jul 2 00:17:29.302725 kubelet[3407]: E0702 00:17:29.302354 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jfdk2"
Jul 2 00:17:29.303407 containerd[1977]: time="2024-07-02T00:17:29.302580529Z" level=error msg="encountered an error cleaning up failed sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.303407 containerd[1977]: time="2024-07-02T00:17:29.302632251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jds8w,Uid:545bdab3-f4e9-4d98-8606-4a0a243e8137,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.303498 kubelet[3407]: E0702 00:17:29.302379 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jfdk2"
Jul 2 00:17:29.303498 kubelet[3407]: E0702 00:17:29.302394 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55cb76f7f7-v94d6_calico-system(951ede4b-50c9-48d8-8b1e-92e6c2b137f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55cb76f7f7-v94d6_calico-system(951ede4b-50c9-48d8-8b1e-92e6c2b137f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55cb76f7f7-v94d6" podUID="951ede4b-50c9-48d8-8b1e-92e6c2b137f6"
Jul 2 00:17:29.303498 kubelet[3407]: E0702 00:17:29.302413 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jfdk2_kube-system(cf1b2163-9933-4743-b885-dee3d2f16bf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jfdk2_kube-system(cf1b2163-9933-4743-b885-dee3d2f16bf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jfdk2" podUID="cf1b2163-9933-4743-b885-dee3d2f16bf0"
Jul 2 00:17:29.304856 kubelet[3407]: E0702 00:17:29.303121 3407 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.304856 kubelet[3407]: E0702 00:17:29.303164 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jds8w"
Jul 2 00:17:29.304856 kubelet[3407]: E0702 00:17:29.303190 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jds8w"
Jul 2 00:17:29.304962 kubelet[3407]: E0702 00:17:29.304730 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jds8w_kube-system(545bdab3-f4e9-4d98-8606-4a0a243e8137)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jds8w_kube-system(545bdab3-f4e9-4d98-8606-4a0a243e8137)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jds8w" podUID="545bdab3-f4e9-4d98-8606-4a0a243e8137"
Jul 2 00:17:29.327769 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b-shm.mount: Deactivated successfully.
Jul 2 00:17:29.452528 containerd[1977]: time="2024-07-02T00:17:29.452283095Z" level=error msg="Failed to destroy network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.455389 containerd[1977]: time="2024-07-02T00:17:29.452882545Z" level=error msg="encountered an error cleaning up failed sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.455389 containerd[1977]: time="2024-07-02T00:17:29.452955090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjx52,Uid:9b7f5e58-3c44-472f-85f0-c915eb069355,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.459957 kubelet[3407]: E0702 00:17:29.455464 3407 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.459957 kubelet[3407]: E0702 00:17:29.459599 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjx52"
Jul 2 00:17:29.459957 kubelet[3407]: E0702 00:17:29.459630 3407 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjx52"
Jul 2 00:17:29.462038 kubelet[3407]: E0702 00:17:29.459683 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rjx52_calico-system(9b7f5e58-3c44-472f-85f0-c915eb069355)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rjx52_calico-system(9b7f5e58-3c44-472f-85f0-c915eb069355)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355"
Jul 2 00:17:29.461954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861-shm.mount: Deactivated successfully.
Jul 2 00:17:29.547118 kubelet[3407]: I0702 00:17:29.546849 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8"
Jul 2 00:17:29.548647 kubelet[3407]: I0702 00:17:29.548350 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b"
Jul 2 00:17:29.555418 kubelet[3407]: I0702 00:17:29.555107 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861"
Jul 2 00:17:29.584289 containerd[1977]: time="2024-07-02T00:17:29.583496393Z" level=info msg="StopPodSandbox for \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\""
Jul 2 00:17:29.584289 containerd[1977]: time="2024-07-02T00:17:29.584196274Z" level=info msg="Ensure that sandbox 7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861 in task-service has been cleanup successfully"
Jul 2 00:17:29.593537 containerd[1977]: time="2024-07-02T00:17:29.592167681Z" level=info msg="StopPodSandbox for \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\""
Jul 2 00:17:29.593537 containerd[1977]: time="2024-07-02T00:17:29.592287054Z" level=info msg="StopPodSandbox for \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\""
Jul 2 00:17:29.593537 containerd[1977]: time="2024-07-02T00:17:29.592476145Z" level=info msg="Ensure that sandbox 38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b in task-service has been cleanup successfully"
Jul 2 00:17:29.594067 kubelet[3407]: I0702 00:17:29.594038 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63"
Jul 2 00:17:29.596947 containerd[1977]: time="2024-07-02T00:17:29.594355771Z" level=info msg="Ensure that sandbox 53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8 in task-service has been cleanup successfully"
Jul 2 00:17:29.601912 containerd[1977]: time="2024-07-02T00:17:29.601723738Z" level=info msg="StopPodSandbox for \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\""
Jul 2 00:17:29.602081 containerd[1977]: time="2024-07-02T00:17:29.601990255Z" level=info msg="Ensure that sandbox abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63 in task-service has been cleanup successfully"
Jul 2 00:17:29.718936 containerd[1977]: time="2024-07-02T00:17:29.718524547Z" level=error msg="StopPodSandbox for \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\" failed" error="failed to destroy network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.724832 kubelet[3407]: E0702 00:17:29.723847 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8"
Jul 2 00:17:29.724832 kubelet[3407]: E0702 00:17:29.723918 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8"}
Jul 2 00:17:29.724832 kubelet[3407]: E0702 00:17:29.724019 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"951ede4b-50c9-48d8-8b1e-92e6c2b137f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 00:17:29.724832 kubelet[3407]: E0702 00:17:29.724055 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"951ede4b-50c9-48d8-8b1e-92e6c2b137f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55cb76f7f7-v94d6" podUID="951ede4b-50c9-48d8-8b1e-92e6c2b137f6"
Jul 2 00:17:29.729832 containerd[1977]: time="2024-07-02T00:17:29.729543376Z" level=error msg="StopPodSandbox for \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\" failed" error="failed to destroy network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.731256 kubelet[3407]: E0702 00:17:29.730165 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63"
Jul 2 00:17:29.731256 kubelet[3407]: E0702 00:17:29.730266 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63"}
Jul 2 00:17:29.731256 kubelet[3407]: E0702 00:17:29.730326 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"545bdab3-f4e9-4d98-8606-4a0a243e8137\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 00:17:29.731256 kubelet[3407]: E0702 00:17:29.730361 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"545bdab3-f4e9-4d98-8606-4a0a243e8137\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jds8w" podUID="545bdab3-f4e9-4d98-8606-4a0a243e8137"
Jul 2 00:17:29.756867 containerd[1977]: time="2024-07-02T00:17:29.756815623Z" level=error msg="StopPodSandbox for \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\" failed" error="failed to destroy network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.757333 containerd[1977]: time="2024-07-02T00:17:29.757282888Z" level=error msg="StopPodSandbox for \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\" failed" error="failed to destroy network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:17:29.757811 kubelet[3407]: E0702 00:17:29.757656 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b"
Jul 2 00:17:29.757900 kubelet[3407]: E0702 00:17:29.757301 3407 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861"
Jul 2 00:17:29.757900 kubelet[3407]: E0702 00:17:29.757868 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861"}
Jul 2 00:17:29.757983 kubelet[3407]: E0702 00:17:29.757916 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b7f5e58-3c44-472f-85f0-c915eb069355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 00:17:29.757983 kubelet[3407]: E0702 00:17:29.757949 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b7f5e58-3c44-472f-85f0-c915eb069355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjx52" podUID="9b7f5e58-3c44-472f-85f0-c915eb069355"
Jul 2 00:17:29.758138 kubelet[3407]: E0702 00:17:29.757998 3407 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b"}
Jul 2 00:17:29.758138 kubelet[3407]: E0702 00:17:29.758026 3407 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf1b2163-9933-4743-b885-dee3d2f16bf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 00:17:29.758138 kubelet[3407]: E0702 00:17:29.758048 3407 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf1b2163-9933-4743-b885-dee3d2f16bf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jfdk2" podUID="cf1b2163-9933-4743-b885-dee3d2f16bf0"
Jul 2 00:17:31.236259 kubelet[3407]: I0702 00:17:31.235154 3407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 00:17:38.518804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217296928.mount: Deactivated successfully.
Jul 2 00:17:38.622358 containerd[1977]: time="2024-07-02T00:17:38.621500885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Jul 2 00:17:38.625258 containerd[1977]: time="2024-07-02T00:17:38.625135448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:38.653320 containerd[1977]: time="2024-07-02T00:17:38.653274653Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:38.657065 containerd[1977]: time="2024-07-02T00:17:38.657022582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:38.657966 containerd[1977]: time="2024-07-02T00:17:38.657789140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 10.113776692s"
Jul 2 00:17:38.657966 containerd[1977]: time="2024-07-02T00:17:38.657962169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Jul 2 00:17:38.754840 containerd[1977]: time="2024-07-02T00:17:38.754334801Z" level=info msg="CreateContainer within sandbox \"e473ff4f046626e4ae06cdc3ffcf2996b7af79296975d59184fbba5b0df6a81f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul 2 00:17:38.828351 containerd[1977]: time="2024-07-02T00:17:38.828213019Z" level=info msg="CreateContainer within sandbox \"e473ff4f046626e4ae06cdc3ffcf2996b7af79296975d59184fbba5b0df6a81f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41371deded2792e3bf79632d6acc26a1942e9d6ad42b96dbaaded3818cd0cdf6\""
Jul 2 00:17:38.831441 containerd[1977]: time="2024-07-02T00:17:38.831401316Z" level=info msg="StartContainer for \"41371deded2792e3bf79632d6acc26a1942e9d6ad42b96dbaaded3818cd0cdf6\""
Jul 2 00:17:38.940479 systemd[1]: Started cri-containerd-41371deded2792e3bf79632d6acc26a1942e9d6ad42b96dbaaded3818cd0cdf6.scope - libcontainer container 41371deded2792e3bf79632d6acc26a1942e9d6ad42b96dbaaded3818cd0cdf6.
Jul 2 00:17:39.112520 containerd[1977]: time="2024-07-02T00:17:39.112081026Z" level=info msg="StartContainer for \"41371deded2792e3bf79632d6acc26a1942e9d6ad42b96dbaaded3818cd0cdf6\" returns successfully"
Jul 2 00:17:39.339728 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 2 00:17:39.339990 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jul 2 00:17:39.995816 systemd[1]: run-containerd-runc-k8s.io-41371deded2792e3bf79632d6acc26a1942e9d6ad42b96dbaaded3818cd0cdf6-runc.YxYdAn.mount: Deactivated successfully.
Jul 2 00:17:40.272556 containerd[1977]: time="2024-07-02T00:17:40.271466722Z" level=info msg="StopPodSandbox for \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\""
Jul 2 00:17:40.467696 kubelet[3407]: I0702 00:17:40.463734 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x5twf" podStartSLOduration=4.573714247 podStartE2EDuration="31.433617873s" podCreationTimestamp="2024-07-02 00:17:09 +0000 UTC" firstStartedPulling="2024-07-02 00:17:11.799584316 +0000 UTC m=+24.767970323" lastFinishedPulling="2024-07-02 00:17:38.659487948 +0000 UTC m=+51.627873949" observedRunningTime="2024-07-02 00:17:39.807632672 +0000 UTC m=+52.776018712" watchObservedRunningTime="2024-07-02 00:17:40.433617873 +0000 UTC m=+53.402003915"
Jul 2 00:17:40.816628 systemd[1]: run-containerd-runc-k8s.io-41371deded2792e3bf79632d6acc26a1942e9d6ad42b96dbaaded3818cd0cdf6-runc.gqqtwv.mount: Deactivated successfully.
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.437 [INFO][4488] k8s.go 608: Cleaning up netns ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63"
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.442 [INFO][4488] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" iface="eth0" netns="/var/run/netns/cni-76ee12f7-2046-91dd-c0a7-c636da20f8a5"
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.442 [INFO][4488] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" iface="eth0" netns="/var/run/netns/cni-76ee12f7-2046-91dd-c0a7-c636da20f8a5"
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.443 [INFO][4488] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" iface="eth0" netns="/var/run/netns/cni-76ee12f7-2046-91dd-c0a7-c636da20f8a5"
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.443 [INFO][4488] k8s.go 615: Releasing IP address(es) ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63"
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.443 [INFO][4488] utils.go 188: Calico CNI releasing IP address ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63"
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.888 [INFO][4494] ipam_plugin.go 411: Releasing address using handleID ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" HandleID="k8s-pod-network.abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.891 [INFO][4494] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.891 [INFO][4494] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.910 [WARNING][4494] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" HandleID="k8s-pod-network.abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.910 [INFO][4494] ipam_plugin.go 439: Releasing address using workloadID ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" HandleID="k8s-pod-network.abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.915 [INFO][4494] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:17:40.921648 containerd[1977]: 2024-07-02 00:17:40.917 [INFO][4488] k8s.go 621: Teardown processing complete. ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63"
Jul 2 00:17:40.923328 containerd[1977]: time="2024-07-02T00:17:40.921992476Z" level=info msg="TearDown network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\" successfully"
Jul 2 00:17:40.923328 containerd[1977]: time="2024-07-02T00:17:40.922026576Z" level=info msg="StopPodSandbox for \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\" returns successfully"
Jul 2 00:17:40.926025 containerd[1977]: time="2024-07-02T00:17:40.925660677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jds8w,Uid:545bdab3-f4e9-4d98-8606-4a0a243e8137,Namespace:kube-system,Attempt:1,}"
Jul 2 00:17:40.933102 systemd[1]: run-netns-cni\x2d76ee12f7\x2d2046\x2d91dd\x2dc0a7\x2dc636da20f8a5.mount: Deactivated successfully.
Jul 2 00:17:41.276112 containerd[1977]: time="2024-07-02T00:17:41.275325955Z" level=info msg="StopPodSandbox for \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\""
Jul 2 00:17:41.396175 (udev-worker)[4429]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:17:41.409844 systemd-networkd[1810]: calic40f28b2b2d: Link UP
Jul 2 00:17:41.415272 systemd-networkd[1810]: calic40f28b2b2d: Gained carrier
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.122 [INFO][4525] utils.go 100: File /var/lib/calico/mtu does not exist
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.146 [INFO][4525] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0 coredns-7db6d8ff4d- kube-system 545bdab3-f4e9-4d98-8606-4a0a243e8137 748 0 2024-07-02 00:17:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-160 coredns-7db6d8ff4d-jds8w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic40f28b2b2d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jds8w" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.146 [INFO][4525] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jds8w" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.229 [INFO][4532] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" HandleID="k8s-pod-network.a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.247 [INFO][4532] ipam_plugin.go 264: Auto assigning IP ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" HandleID="k8s-pod-network.a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000114110), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-160", "pod":"coredns-7db6d8ff4d-jds8w", "timestamp":"2024-07-02 00:17:41.229065214 +0000 UTC"}, Hostname:"ip-172-31-23-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.247 [INFO][4532] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.247 [INFO][4532] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.247 [INFO][4532] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-160'
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.253 [INFO][4532] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" host="ip-172-31-23-160"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.265 [INFO][4532] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-160"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.285 [INFO][4532] ipam.go 489: Trying affinity for 192.168.121.0/26 host="ip-172-31-23-160"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.292 [INFO][4532] ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ip-172-31-23-160"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.304 [INFO][4532] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ip-172-31-23-160"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.305 [INFO][4532] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" host="ip-172-31-23-160"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.308 [INFO][4532] ipam.go 1685: Creating new handle: k8s-pod-network.a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.327 [INFO][4532] ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" host="ip-172-31-23-160"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.338 [INFO][4532] ipam.go 1216: Successfully claimed IPs: [192.168.121.1/26] block=192.168.121.0/26 handle="k8s-pod-network.a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" host="ip-172-31-23-160"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.340 [INFO][4532] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.1/26] handle="k8s-pod-network.a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" host="ip-172-31-23-160"
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.340 [INFO][4532] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:17:41.499400 containerd[1977]: 2024-07-02 00:17:41.340 [INFO][4532] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.121.1/26] IPv6=[] ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" HandleID="k8s-pod-network.a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:41.503736 containerd[1977]: 2024-07-02 00:17:41.364 [INFO][4525] k8s.go 386: Populated endpoint ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jds8w" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"545bdab3-f4e9-4d98-8606-4a0a243e8137", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"", Pod:"coredns-7db6d8ff4d-jds8w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic40f28b2b2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:17:41.503736 containerd[1977]: 2024-07-02 00:17:41.365 [INFO][4525] k8s.go 387: Calico CNI using IPs: [192.168.121.1/32] ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jds8w" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:41.503736 containerd[1977]: 2024-07-02 00:17:41.366 [INFO][4525] dataplane_linux.go 68: Setting the host side veth name to calic40f28b2b2d ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jds8w" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:41.503736 containerd[1977]: 2024-07-02 00:17:41.410 [INFO][4525] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jds8w" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:41.503736 containerd[1977]: 2024-07-02 00:17:41.424 [INFO][4525] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jds8w" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"545bdab3-f4e9-4d98-8606-4a0a243e8137", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061", Pod:"coredns-7db6d8ff4d-jds8w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic40f28b2b2d", MAC:"82:50:4a:a7:a6:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:17:41.503736 containerd[1977]: 2024-07-02 00:17:41.491 [INFO][4525] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jds8w" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.523 [INFO][4561] k8s.go 608: Cleaning up netns ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.529 [INFO][4561] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" iface="eth0" netns="/var/run/netns/cni-c2d5c0b4-297f-f0a5-0ec2-82aa35072fd2"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.530 [INFO][4561] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" iface="eth0" netns="/var/run/netns/cni-c2d5c0b4-297f-f0a5-0ec2-82aa35072fd2"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.531 [INFO][4561] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" iface="eth0" netns="/var/run/netns/cni-c2d5c0b4-297f-f0a5-0ec2-82aa35072fd2"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.531 [INFO][4561] k8s.go 615: Releasing IP address(es) ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.531 [INFO][4561] utils.go 188: Calico CNI releasing IP address ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.720 [INFO][4602] ipam_plugin.go 411: Releasing address using handleID ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" HandleID="k8s-pod-network.38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.724 [INFO][4602] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.725 [INFO][4602] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.760 [WARNING][4602] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" HandleID="k8s-pod-network.38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.761 [INFO][4602] ipam_plugin.go 439: Releasing address using workloadID ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" HandleID="k8s-pod-network.38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.766 [INFO][4602] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:17:41.781920 containerd[1977]: 2024-07-02 00:17:41.771 [INFO][4561] k8s.go 621: Teardown processing complete. ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b"
Jul 2 00:17:41.784849 containerd[1977]: time="2024-07-02T00:17:41.784673485Z" level=info msg="TearDown network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\" successfully"
Jul 2 00:17:41.784849 containerd[1977]: time="2024-07-02T00:17:41.784738106Z" level=info msg="StopPodSandbox for \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\" returns successfully"
Jul 2 00:17:41.788541 containerd[1977]: time="2024-07-02T00:17:41.788490622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jfdk2,Uid:cf1b2163-9933-4743-b885-dee3d2f16bf0,Namespace:kube-system,Attempt:1,}"
Jul 2 00:17:41.802188 systemd[1]: run-netns-cni\x2dc2d5c0b4\x2d297f\x2df0a5\x2d0ec2\x2d82aa35072fd2.mount: Deactivated successfully.
Jul 2 00:17:41.830846 containerd[1977]: time="2024-07-02T00:17:41.830156944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:41.830846 containerd[1977]: time="2024-07-02T00:17:41.830268401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:41.830846 containerd[1977]: time="2024-07-02T00:17:41.830302294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:41.830846 containerd[1977]: time="2024-07-02T00:17:41.830322535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:41.912672 systemd[1]: Started cri-containerd-a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061.scope - libcontainer container a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061.
Jul 2 00:17:42.150292 containerd[1977]: time="2024-07-02T00:17:42.150154983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jds8w,Uid:545bdab3-f4e9-4d98-8606-4a0a243e8137,Namespace:kube-system,Attempt:1,} returns sandbox id \"a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061\""
Jul 2 00:17:42.184340 containerd[1977]: time="2024-07-02T00:17:42.178654541Z" level=info msg="CreateContainer within sandbox \"a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:17:42.251254 containerd[1977]: time="2024-07-02T00:17:42.251182123Z" level=info msg="CreateContainer within sandbox \"a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae6e3c28a2514d04d90cd4210101cb6d35dbf2ddd38c900487401ae0159991e1\""
Jul 2 00:17:42.252416 containerd[1977]: time="2024-07-02T00:17:42.252379801Z" level=info msg="StartContainer for \"ae6e3c28a2514d04d90cd4210101cb6d35dbf2ddd38c900487401ae0159991e1\""
Jul 2 00:17:42.310892 systemd[1]: Started cri-containerd-ae6e3c28a2514d04d90cd4210101cb6d35dbf2ddd38c900487401ae0159991e1.scope - libcontainer container ae6e3c28a2514d04d90cd4210101cb6d35dbf2ddd38c900487401ae0159991e1.
Jul 2 00:17:42.352526 systemd-networkd[1810]: cali4d18caf649d: Link UP
Jul 2 00:17:42.355122 systemd-networkd[1810]: cali4d18caf649d: Gained carrier
Jul 2 00:17:42.355981 (udev-worker)[4426]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.018 [INFO][4652] utils.go 100: File /var/lib/calico/mtu does not exist
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.048 [INFO][4652] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0 coredns-7db6d8ff4d- kube-system cf1b2163-9933-4743-b885-dee3d2f16bf0 755 0 2024-07-02 00:17:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-160 coredns-7db6d8ff4d-jfdk2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4d18caf649d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jfdk2" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.048 [INFO][4652] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jfdk2" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.147 [INFO][4682] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" HandleID="k8s-pod-network.deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.192 [INFO][4682] ipam_plugin.go 264: Auto assigning IP ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" HandleID="k8s-pod-network.deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031abd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-160", "pod":"coredns-7db6d8ff4d-jfdk2", "timestamp":"2024-07-02 00:17:42.14788703 +0000 UTC"}, Hostname:"ip-172-31-23-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.192 [INFO][4682] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.192 [INFO][4682] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.192 [INFO][4682] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-160'
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.198 [INFO][4682] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" host="ip-172-31-23-160"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.219 [INFO][4682] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-160"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.266 [INFO][4682] ipam.go 489: Trying affinity for 192.168.121.0/26 host="ip-172-31-23-160"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.274 [INFO][4682] ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ip-172-31-23-160"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.284 [INFO][4682] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ip-172-31-23-160"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.284 [INFO][4682] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" host="ip-172-31-23-160"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.289 [INFO][4682] ipam.go 1685: Creating new handle: k8s-pod-network.deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.321 [INFO][4682] ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" host="ip-172-31-23-160"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.342 [INFO][4682] ipam.go 1216: Successfully claimed IPs: [192.168.121.2/26] block=192.168.121.0/26 handle="k8s-pod-network.deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" host="ip-172-31-23-160"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.342 [INFO][4682] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.2/26] handle="k8s-pod-network.deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" host="ip-172-31-23-160"
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.342 [INFO][4682] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:17:42.380949 containerd[1977]: 2024-07-02 00:17:42.342 [INFO][4682] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.121.2/26] IPv6=[] ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" HandleID="k8s-pod-network.deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:42.390874 containerd[1977]: 2024-07-02 00:17:42.347 [INFO][4652] k8s.go 386: Populated endpoint ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jfdk2" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf1b2163-9933-4743-b885-dee3d2f16bf0", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"", Pod:"coredns-7db6d8ff4d-jfdk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d18caf649d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:17:42.390874 containerd[1977]: 2024-07-02 00:17:42.348 [INFO][4652] k8s.go 387: Calico CNI using IPs: [192.168.121.2/32] ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jfdk2" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:42.390874 containerd[1977]: 2024-07-02 00:17:42.348 [INFO][4652] dataplane_linux.go 68: Setting the host side veth name to cali4d18caf649d ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jfdk2" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:42.390874 containerd[1977]: 2024-07-02 00:17:42.353 [INFO][4652] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jfdk2" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:42.390874 containerd[1977]: 2024-07-02 00:17:42.355 [INFO][4652] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jfdk2" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf1b2163-9933-4743-b885-dee3d2f16bf0", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb", Pod:"coredns-7db6d8ff4d-jfdk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d18caf649d", MAC:"36:07:0d:0a:26:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:17:42.390874 containerd[1977]: 2024-07-02 00:17:42.375 [INFO][4652] k8s.go 500: Wrote updated endpoint to datastore ContainerID="deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jfdk2" WorkloadEndpoint="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0"
Jul 2 00:17:42.461519 containerd[1977]: time="2024-07-02T00:17:42.459602259Z" level=info msg="StartContainer for \"ae6e3c28a2514d04d90cd4210101cb6d35dbf2ddd38c900487401ae0159991e1\" returns successfully"
Jul 2 00:17:42.516176 containerd[1977]: time="2024-07-02T00:17:42.514689369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:42.516176 containerd[1977]: time="2024-07-02T00:17:42.514776095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:42.516176 containerd[1977]: time="2024-07-02T00:17:42.514808052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:42.516176 containerd[1977]: time="2024-07-02T00:17:42.514825016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:42.607595 systemd[1]: Started cri-containerd-deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb.scope - libcontainer container deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb.
Jul 2 00:17:42.684533 systemd-networkd[1810]: calic40f28b2b2d: Gained IPv6LL Jul 2 00:17:42.755018 containerd[1977]: time="2024-07-02T00:17:42.754884088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jfdk2,Uid:cf1b2163-9933-4743-b885-dee3d2f16bf0,Namespace:kube-system,Attempt:1,} returns sandbox id \"deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb\"" Jul 2 00:17:42.766526 containerd[1977]: time="2024-07-02T00:17:42.766479958Z" level=info msg="CreateContainer within sandbox \"deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:17:42.835250 containerd[1977]: time="2024-07-02T00:17:42.834807488Z" level=info msg="CreateContainer within sandbox \"deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f40183eac66c059695c6fb546c83a375a8e4e377ce0fdf6e79e7f6217c354d99\"" Jul 2 00:17:42.838264 containerd[1977]: time="2024-07-02T00:17:42.837366288Z" level=info msg="StartContainer for \"f40183eac66c059695c6fb546c83a375a8e4e377ce0fdf6e79e7f6217c354d99\"" Jul 2 00:17:42.926682 systemd[1]: run-containerd-runc-k8s.io-f40183eac66c059695c6fb546c83a375a8e4e377ce0fdf6e79e7f6217c354d99-runc.j3G7Le.mount: Deactivated successfully. Jul 2 00:17:42.940534 systemd[1]: Started cri-containerd-f40183eac66c059695c6fb546c83a375a8e4e377ce0fdf6e79e7f6217c354d99.scope - libcontainer container f40183eac66c059695c6fb546c83a375a8e4e377ce0fdf6e79e7f6217c354d99. 
Jul 2 00:17:43.012697 containerd[1977]: time="2024-07-02T00:17:43.012578488Z" level=info msg="StartContainer for \"f40183eac66c059695c6fb546c83a375a8e4e377ce0fdf6e79e7f6217c354d99\" returns successfully" Jul 2 00:17:43.705463 systemd-networkd[1810]: cali4d18caf649d: Gained IPv6LL Jul 2 00:17:43.870771 kubelet[3407]: I0702 00:17:43.870696 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jfdk2" podStartSLOduration=42.870601948 podStartE2EDuration="42.870601948s" podCreationTimestamp="2024-07-02 00:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:43.8600798 +0000 UTC m=+56.828465820" watchObservedRunningTime="2024-07-02 00:17:43.870601948 +0000 UTC m=+56.838987970" Jul 2 00:17:43.871378 kubelet[3407]: I0702 00:17:43.870878 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jds8w" podStartSLOduration=42.870869328 podStartE2EDuration="42.870869328s" podCreationTimestamp="2024-07-02 00:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:42.921745745 +0000 UTC m=+55.890131765" watchObservedRunningTime="2024-07-02 00:17:43.870869328 +0000 UTC m=+56.839255347" Jul 2 00:17:43.931024 systemd-networkd[1810]: vxlan.calico: Link UP Jul 2 00:17:43.931039 systemd-networkd[1810]: vxlan.calico: Gained carrier Jul 2 00:17:44.314702 containerd[1977]: time="2024-07-02T00:17:44.314639669Z" level=info msg="StopPodSandbox for \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\"" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.475 [INFO][4950] k8s.go 608: Cleaning up netns ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.475 [INFO][4950] 
dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" iface="eth0" netns="/var/run/netns/cni-b3370a4f-66cc-82e5-6803-99b53d0e1d29" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.476 [INFO][4950] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" iface="eth0" netns="/var/run/netns/cni-b3370a4f-66cc-82e5-6803-99b53d0e1d29" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.477 [INFO][4950] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" iface="eth0" netns="/var/run/netns/cni-b3370a4f-66cc-82e5-6803-99b53d0e1d29" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.477 [INFO][4950] k8s.go 615: Releasing IP address(es) ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.477 [INFO][4950] utils.go 188: Calico CNI releasing IP address ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.540 [INFO][4968] ipam_plugin.go 411: Releasing address using handleID ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" HandleID="k8s-pod-network.53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.540 [INFO][4968] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.541 [INFO][4968] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.560 [WARNING][4968] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" HandleID="k8s-pod-network.53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.560 [INFO][4968] ipam_plugin.go 439: Releasing address using workloadID ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" HandleID="k8s-pod-network.53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.563 [INFO][4968] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:44.573491 containerd[1977]: 2024-07-02 00:17:44.567 [INFO][4950] k8s.go 621: Teardown processing complete. ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:44.581520 containerd[1977]: time="2024-07-02T00:17:44.577015130Z" level=info msg="TearDown network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\" successfully" Jul 2 00:17:44.581520 containerd[1977]: time="2024-07-02T00:17:44.577057178Z" level=info msg="StopPodSandbox for \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\" returns successfully" Jul 2 00:17:44.581520 containerd[1977]: time="2024-07-02T00:17:44.577914907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cb76f7f7-v94d6,Uid:951ede4b-50c9-48d8-8b1e-92e6c2b137f6,Namespace:calico-system,Attempt:1,}" Jul 2 00:17:44.584273 systemd[1]: run-netns-cni\x2db3370a4f\x2d66cc\x2d82e5\x2d6803\x2d99b53d0e1d29.mount: Deactivated successfully. 
Jul 2 00:17:44.891562 (udev-worker)[4919]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:17:44.896658 systemd-networkd[1810]: calie8d5066b930: Link UP Jul 2 00:17:44.902108 systemd-networkd[1810]: calie8d5066b930: Gained carrier Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.690 [INFO][4982] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0 calico-kube-controllers-55cb76f7f7- calico-system 951ede4b-50c9-48d8-8b1e-92e6c2b137f6 784 0 2024-07-02 00:17:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55cb76f7f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-160 calico-kube-controllers-55cb76f7f7-v94d6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie8d5066b930 [] []}} ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Namespace="calico-system" Pod="calico-kube-controllers-55cb76f7f7-v94d6" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.690 [INFO][4982] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Namespace="calico-system" Pod="calico-kube-controllers-55cb76f7f7-v94d6" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.753 [INFO][4993] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" HandleID="k8s-pod-network.1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" 
Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.788 [INFO][4993] ipam_plugin.go 264: Auto assigning IP ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" HandleID="k8s-pod-network.1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f0050), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-160", "pod":"calico-kube-controllers-55cb76f7f7-v94d6", "timestamp":"2024-07-02 00:17:44.75320828 +0000 UTC"}, Hostname:"ip-172-31-23-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.788 [INFO][4993] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.788 [INFO][4993] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.788 [INFO][4993] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-160' Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.798 [INFO][4993] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" host="ip-172-31-23-160" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.824 [INFO][4993] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-160" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.837 [INFO][4993] ipam.go 489: Trying affinity for 192.168.121.0/26 host="ip-172-31-23-160" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.840 [INFO][4993] ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ip-172-31-23-160" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.848 [INFO][4993] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ip-172-31-23-160" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.848 [INFO][4993] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" host="ip-172-31-23-160" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.852 [INFO][4993] ipam.go 1685: Creating new handle: k8s-pod-network.1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0 Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.871 [INFO][4993] ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" host="ip-172-31-23-160" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.883 [INFO][4993] ipam.go 1216: Successfully claimed IPs: [192.168.121.3/26] block=192.168.121.0/26 
handle="k8s-pod-network.1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" host="ip-172-31-23-160" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.883 [INFO][4993] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.3/26] handle="k8s-pod-network.1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" host="ip-172-31-23-160" Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.883 [INFO][4993] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:44.942483 containerd[1977]: 2024-07-02 00:17:44.883 [INFO][4993] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.121.3/26] IPv6=[] ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" HandleID="k8s-pod-network.1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.947327 containerd[1977]: 2024-07-02 00:17:44.887 [INFO][4982] k8s.go 386: Populated endpoint ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Namespace="calico-system" Pod="calico-kube-controllers-55cb76f7f7-v94d6" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0", GenerateName:"calico-kube-controllers-55cb76f7f7-", Namespace:"calico-system", SelfLink:"", UID:"951ede4b-50c9-48d8-8b1e-92e6c2b137f6", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cb76f7f7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"", Pod:"calico-kube-controllers-55cb76f7f7-v94d6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8d5066b930", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:44.947327 containerd[1977]: 2024-07-02 00:17:44.888 [INFO][4982] k8s.go 387: Calico CNI using IPs: [192.168.121.3/32] ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Namespace="calico-system" Pod="calico-kube-controllers-55cb76f7f7-v94d6" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.947327 containerd[1977]: 2024-07-02 00:17:44.888 [INFO][4982] dataplane_linux.go 68: Setting the host side veth name to calie8d5066b930 ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Namespace="calico-system" Pod="calico-kube-controllers-55cb76f7f7-v94d6" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.947327 containerd[1977]: 2024-07-02 00:17:44.903 [INFO][4982] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Namespace="calico-system" Pod="calico-kube-controllers-55cb76f7f7-v94d6" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.947327 containerd[1977]: 2024-07-02 00:17:44.904 [INFO][4982] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Namespace="calico-system" Pod="calico-kube-controllers-55cb76f7f7-v94d6" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0", GenerateName:"calico-kube-controllers-55cb76f7f7-", Namespace:"calico-system", SelfLink:"", UID:"951ede4b-50c9-48d8-8b1e-92e6c2b137f6", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cb76f7f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0", Pod:"calico-kube-controllers-55cb76f7f7-v94d6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8d5066b930", MAC:"92:46:32:0d:c0:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:44.947327 containerd[1977]: 2024-07-02 00:17:44.938 [INFO][4982] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0" Namespace="calico-system" Pod="calico-kube-controllers-55cb76f7f7-v94d6" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:44.994256 containerd[1977]: time="2024-07-02T00:17:44.993371953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:17:44.994256 containerd[1977]: time="2024-07-02T00:17:44.993469976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:44.994256 containerd[1977]: time="2024-07-02T00:17:44.993503081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:17:44.994256 containerd[1977]: time="2024-07-02T00:17:44.993525584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:45.042397 systemd[1]: Started cri-containerd-1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0.scope - libcontainer container 1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0. 
Jul 2 00:17:45.176627 containerd[1977]: time="2024-07-02T00:17:45.174690770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cb76f7f7-v94d6,Uid:951ede4b-50c9-48d8-8b1e-92e6c2b137f6,Namespace:calico-system,Attempt:1,} returns sandbox id \"1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0\"" Jul 2 00:17:45.192573 containerd[1977]: time="2024-07-02T00:17:45.192113925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:17:45.271052 containerd[1977]: time="2024-07-02T00:17:45.270174352Z" level=info msg="StopPodSandbox for \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\"" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.398 [INFO][5073] k8s.go 608: Cleaning up netns ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.398 [INFO][5073] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" iface="eth0" netns="/var/run/netns/cni-39397e55-81e8-53f7-59cc-a0d0812136da" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.398 [INFO][5073] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" iface="eth0" netns="/var/run/netns/cni-39397e55-81e8-53f7-59cc-a0d0812136da" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.399 [INFO][5073] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" iface="eth0" netns="/var/run/netns/cni-39397e55-81e8-53f7-59cc-a0d0812136da" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.399 [INFO][5073] k8s.go 615: Releasing IP address(es) ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.400 [INFO][5073] utils.go 188: Calico CNI releasing IP address ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.446 [INFO][5079] ipam_plugin.go 411: Releasing address using handleID ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" HandleID="k8s-pod-network.7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.446 [INFO][5079] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.446 [INFO][5079] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.453 [WARNING][5079] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" HandleID="k8s-pod-network.7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.453 [INFO][5079] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" HandleID="k8s-pod-network.7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.456 [INFO][5079] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:45.460829 containerd[1977]: 2024-07-02 00:17:45.458 [INFO][5073] k8s.go 621: Teardown processing complete. ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:45.463857 containerd[1977]: time="2024-07-02T00:17:45.460913981Z" level=info msg="TearDown network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\" successfully" Jul 2 00:17:45.463857 containerd[1977]: time="2024-07-02T00:17:45.460946163Z" level=info msg="StopPodSandbox for \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\" returns successfully" Jul 2 00:17:45.463857 containerd[1977]: time="2024-07-02T00:17:45.463614693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjx52,Uid:9b7f5e58-3c44-472f-85f0-c915eb069355,Namespace:calico-system,Attempt:1,}" Jul 2 00:17:45.504536 systemd-networkd[1810]: vxlan.calico: Gained IPv6LL Jul 2 00:17:45.589536 systemd[1]: run-netns-cni\x2d39397e55\x2d81e8\x2d53f7\x2d59cc\x2da0d0812136da.mount: Deactivated successfully. 
Jul 2 00:17:45.780739 systemd-networkd[1810]: cali21938ae4919: Link UP Jul 2 00:17:45.783205 systemd-networkd[1810]: cali21938ae4919: Gained carrier Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.609 [INFO][5085] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0 csi-node-driver- calico-system 9b7f5e58-3c44-472f-85f0-c915eb069355 797 0 2024-07-02 00:17:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-23-160 csi-node-driver-rjx52 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali21938ae4919 [] []}} ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Namespace="calico-system" Pod="csi-node-driver-rjx52" WorkloadEndpoint="ip--172--31--23--160-k8s-csi--node--driver--rjx52-" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.610 [INFO][5085] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Namespace="calico-system" Pod="csi-node-driver-rjx52" WorkloadEndpoint="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.660 [INFO][5098] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" HandleID="k8s-pod-network.d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.677 [INFO][5098] ipam_plugin.go 264: Auto assigning IP 
ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" HandleID="k8s-pod-network.d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-160", "pod":"csi-node-driver-rjx52", "timestamp":"2024-07-02 00:17:45.660125337 +0000 UTC"}, Hostname:"ip-172-31-23-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.677 [INFO][5098] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.677 [INFO][5098] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.677 [INFO][5098] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-160' Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.680 [INFO][5098] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" host="ip-172-31-23-160" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.685 [INFO][5098] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-160" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.693 [INFO][5098] ipam.go 489: Trying affinity for 192.168.121.0/26 host="ip-172-31-23-160" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.695 [INFO][5098] ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ip-172-31-23-160" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.698 [INFO][5098] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 
host="ip-172-31-23-160" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.698 [INFO][5098] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" host="ip-172-31-23-160" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.701 [INFO][5098] ipam.go 1685: Creating new handle: k8s-pod-network.d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.713 [INFO][5098] ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" host="ip-172-31-23-160" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.735 [INFO][5098] ipam.go 1216: Successfully claimed IPs: [192.168.121.4/26] block=192.168.121.0/26 handle="k8s-pod-network.d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" host="ip-172-31-23-160" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.736 [INFO][5098] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.4/26] handle="k8s-pod-network.d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" host="ip-172-31-23-160" Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.736 [INFO][5098] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:17:45.843068 containerd[1977]: 2024-07-02 00:17:45.736 [INFO][5098] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.121.4/26] IPv6=[] ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" HandleID="k8s-pod-network.d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.846174 containerd[1977]: 2024-07-02 00:17:45.760 [INFO][5085] k8s.go 386: Populated endpoint ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Namespace="calico-system" Pod="csi-node-driver-rjx52" WorkloadEndpoint="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b7f5e58-3c44-472f-85f0-c915eb069355", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"", Pod:"csi-node-driver-rjx52", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali21938ae4919", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:45.846174 containerd[1977]: 2024-07-02 00:17:45.761 [INFO][5085] k8s.go 387: Calico CNI using IPs: [192.168.121.4/32] ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Namespace="calico-system" Pod="csi-node-driver-rjx52" WorkloadEndpoint="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.846174 containerd[1977]: 2024-07-02 00:17:45.762 [INFO][5085] dataplane_linux.go 68: Setting the host side veth name to cali21938ae4919 ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Namespace="calico-system" Pod="csi-node-driver-rjx52" WorkloadEndpoint="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.846174 containerd[1977]: 2024-07-02 00:17:45.779 [INFO][5085] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Namespace="calico-system" Pod="csi-node-driver-rjx52" WorkloadEndpoint="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.846174 containerd[1977]: 2024-07-02 00:17:45.785 [INFO][5085] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Namespace="calico-system" Pod="csi-node-driver-rjx52" WorkloadEndpoint="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b7f5e58-3c44-472f-85f0-c915eb069355", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 9, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab", Pod:"csi-node-driver-rjx52", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali21938ae4919", MAC:"96:0f:8f:1f:c6:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:45.846174 containerd[1977]: 2024-07-02 00:17:45.830 [INFO][5085] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab" Namespace="calico-system" Pod="csi-node-driver-rjx52" WorkloadEndpoint="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:45.976747 containerd[1977]: time="2024-07-02T00:17:45.967885074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:17:45.976747 containerd[1977]: time="2024-07-02T00:17:45.967986255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:45.976747 containerd[1977]: time="2024-07-02T00:17:45.968013279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:17:45.976747 containerd[1977]: time="2024-07-02T00:17:45.968034913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:46.076466 systemd[1]: run-containerd-runc-k8s.io-d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab-runc.oSBvfM.mount: Deactivated successfully. Jul 2 00:17:46.089555 systemd[1]: Started cri-containerd-d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab.scope - libcontainer container d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab. Jul 2 00:17:46.164612 containerd[1977]: time="2024-07-02T00:17:46.164559436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjx52,Uid:9b7f5e58-3c44-472f-85f0-c915eb069355,Namespace:calico-system,Attempt:1,} returns sandbox id \"d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab\"" Jul 2 00:17:46.649712 systemd-networkd[1810]: calie8d5066b930: Gained IPv6LL Jul 2 00:17:47.225781 systemd-networkd[1810]: cali21938ae4919: Gained IPv6LL Jul 2 00:17:47.373353 containerd[1977]: time="2024-07-02T00:17:47.373085332Z" level=info msg="StopPodSandbox for \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\"" Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.546 [WARNING][5171] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf1b2163-9933-4743-b885-dee3d2f16bf0", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb", Pod:"coredns-7db6d8ff4d-jfdk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d18caf649d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.547 [INFO][5171] k8s.go 608: Cleaning up netns 
ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.547 [INFO][5171] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" iface="eth0" netns="" Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.547 [INFO][5171] k8s.go 615: Releasing IP address(es) ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.547 [INFO][5171] utils.go 188: Calico CNI releasing IP address ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.635 [INFO][5178] ipam_plugin.go 411: Releasing address using handleID ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" HandleID="k8s-pod-network.38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0" Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.635 [INFO][5178] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.636 [INFO][5178] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.650 [WARNING][5178] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" HandleID="k8s-pod-network.38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0" Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.651 [INFO][5178] ipam_plugin.go 439: Releasing address using workloadID ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" HandleID="k8s-pod-network.38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0" Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.653 [INFO][5178] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:47.661755 containerd[1977]: 2024-07-02 00:17:47.656 [INFO][5171] k8s.go 621: Teardown processing complete. ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Jul 2 00:17:47.663152 containerd[1977]: time="2024-07-02T00:17:47.662683167Z" level=info msg="TearDown network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\" successfully" Jul 2 00:17:47.663152 containerd[1977]: time="2024-07-02T00:17:47.662806269Z" level=info msg="StopPodSandbox for \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\" returns successfully" Jul 2 00:17:47.665164 containerd[1977]: time="2024-07-02T00:17:47.664688991Z" level=info msg="RemovePodSandbox for \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\"" Jul 2 00:17:47.665164 containerd[1977]: time="2024-07-02T00:17:47.664735827Z" level=info msg="Forcibly stopping sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\"" Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.738 [WARNING][5196] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cf1b2163-9933-4743-b885-dee3d2f16bf0", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"deaa1880bdf9c8c0a63df8a4472a1404a9be6edb1cb261fab5640dfd87f95adb", Pod:"coredns-7db6d8ff4d-jfdk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d18caf649d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.739 [INFO][5196] k8s.go 608: Cleaning up netns 
ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.740 [INFO][5196] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" iface="eth0" netns="" Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.740 [INFO][5196] k8s.go 615: Releasing IP address(es) ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.740 [INFO][5196] utils.go 188: Calico CNI releasing IP address ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.776 [INFO][5202] ipam_plugin.go 411: Releasing address using handleID ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" HandleID="k8s-pod-network.38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0" Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.776 [INFO][5202] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.776 [INFO][5202] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.785 [WARNING][5202] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" HandleID="k8s-pod-network.38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0" Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.785 [INFO][5202] ipam_plugin.go 439: Releasing address using workloadID ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" HandleID="k8s-pod-network.38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jfdk2-eth0" Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.788 [INFO][5202] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:47.799096 containerd[1977]: 2024-07-02 00:17:47.794 [INFO][5196] k8s.go 621: Teardown processing complete. ContainerID="38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b" Jul 2 00:17:47.800363 containerd[1977]: time="2024-07-02T00:17:47.799149094Z" level=info msg="TearDown network for sandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\" successfully" Jul 2 00:17:47.807808 containerd[1977]: time="2024-07-02T00:17:47.807751933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:17:47.808011 containerd[1977]: time="2024-07-02T00:17:47.807850223Z" level=info msg="RemovePodSandbox \"38721712ebb0b339560724c49f7b5f3a9c2ebae398d7be731dc61a0e76529b6b\" returns successfully" Jul 2 00:17:47.808721 containerd[1977]: time="2024-07-02T00:17:47.808638348Z" level=info msg="StopPodSandbox for \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\"" Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:47.948 [WARNING][5224] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b7f5e58-3c44-472f-85f0-c915eb069355", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab", Pod:"csi-node-driver-rjx52", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali21938ae4919", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:47.949 [INFO][5224] k8s.go 608: Cleaning up netns ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:47.949 [INFO][5224] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" iface="eth0" netns="" Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:47.949 [INFO][5224] k8s.go 615: Releasing IP address(es) ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:47.949 [INFO][5224] utils.go 188: Calico CNI releasing IP address ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:48.014 [INFO][5230] ipam_plugin.go 411: Releasing address using handleID ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" HandleID="k8s-pod-network.7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:48.015 [INFO][5230] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:48.015 [INFO][5230] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:48.025 [WARNING][5230] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" HandleID="k8s-pod-network.7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:48.025 [INFO][5230] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" HandleID="k8s-pod-network.7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:48.028 [INFO][5230] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:48.039532 containerd[1977]: 2024-07-02 00:17:48.034 [INFO][5224] k8s.go 621: Teardown processing complete. ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:48.039532 containerd[1977]: time="2024-07-02T00:17:48.038158941Z" level=info msg="TearDown network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\" successfully" Jul 2 00:17:48.039532 containerd[1977]: time="2024-07-02T00:17:48.038191289Z" level=info msg="StopPodSandbox for \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\" returns successfully" Jul 2 00:17:48.039532 containerd[1977]: time="2024-07-02T00:17:48.039139604Z" level=info msg="RemovePodSandbox for \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\"" Jul 2 00:17:48.039532 containerd[1977]: time="2024-07-02T00:17:48.039178422Z" level=info msg="Forcibly stopping sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\"" Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.137 [WARNING][5250] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b7f5e58-3c44-472f-85f0-c915eb069355", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab", Pod:"csi-node-driver-rjx52", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali21938ae4919", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.138 [INFO][5250] k8s.go 608: Cleaning up netns ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.138 [INFO][5250] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" iface="eth0" netns="" Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.138 [INFO][5250] k8s.go 615: Releasing IP address(es) ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.138 [INFO][5250] utils.go 188: Calico CNI releasing IP address ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.257 [INFO][5256] ipam_plugin.go 411: Releasing address using handleID ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" HandleID="k8s-pod-network.7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.257 [INFO][5256] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.257 [INFO][5256] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.281 [WARNING][5256] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" HandleID="k8s-pod-network.7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.281 [INFO][5256] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" HandleID="k8s-pod-network.7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Workload="ip--172--31--23--160-k8s-csi--node--driver--rjx52-eth0" Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.286 [INFO][5256] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:48.293008 containerd[1977]: 2024-07-02 00:17:48.288 [INFO][5250] k8s.go 621: Teardown processing complete. ContainerID="7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861" Jul 2 00:17:48.293008 containerd[1977]: time="2024-07-02T00:17:48.292905665Z" level=info msg="TearDown network for sandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\" successfully" Jul 2 00:17:48.300000 containerd[1977]: time="2024-07-02T00:17:48.299905072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:17:48.300000 containerd[1977]: time="2024-07-02T00:17:48.299994760Z" level=info msg="RemovePodSandbox \"7a808557d033530a86c0b3bd1e9e3da1063d9c045d4dc631973d638390427861\" returns successfully" Jul 2 00:17:48.301117 containerd[1977]: time="2024-07-02T00:17:48.301074255Z" level=info msg="StopPodSandbox for \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\"" Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.523 [WARNING][5275] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0", GenerateName:"calico-kube-controllers-55cb76f7f7-", Namespace:"calico-system", SelfLink:"", UID:"951ede4b-50c9-48d8-8b1e-92e6c2b137f6", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cb76f7f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0", Pod:"calico-kube-controllers-55cb76f7f7-v94d6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8d5066b930", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.524 [INFO][5275] k8s.go 608: Cleaning up netns ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.524 [INFO][5275] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" iface="eth0" netns="" Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.525 [INFO][5275] k8s.go 615: Releasing IP address(es) ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.525 [INFO][5275] utils.go 188: Calico CNI releasing IP address ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.585 [INFO][5282] ipam_plugin.go 411: Releasing address using handleID ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" HandleID="k8s-pod-network.53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.585 [INFO][5282] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.586 [INFO][5282] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.599 [WARNING][5282] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" HandleID="k8s-pod-network.53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.599 [INFO][5282] ipam_plugin.go 439: Releasing address using workloadID ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" HandleID="k8s-pod-network.53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.602 [INFO][5282] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:48.610850 containerd[1977]: 2024-07-02 00:17:48.606 [INFO][5275] k8s.go 621: Teardown processing complete. ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:48.613131 containerd[1977]: time="2024-07-02T00:17:48.610953529Z" level=info msg="TearDown network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\" successfully" Jul 2 00:17:48.613131 containerd[1977]: time="2024-07-02T00:17:48.610992111Z" level=info msg="StopPodSandbox for \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\" returns successfully" Jul 2 00:17:48.613131 containerd[1977]: time="2024-07-02T00:17:48.613021181Z" level=info msg="RemovePodSandbox for \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\"" Jul 2 00:17:48.613131 containerd[1977]: time="2024-07-02T00:17:48.613060635Z" level=info msg="Forcibly stopping sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\"" Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.689 [WARNING][5303] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0", GenerateName:"calico-kube-controllers-55cb76f7f7-", Namespace:"calico-system", SelfLink:"", UID:"951ede4b-50c9-48d8-8b1e-92e6c2b137f6", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cb76f7f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0", Pod:"calico-kube-controllers-55cb76f7f7-v94d6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8d5066b930", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.690 [INFO][5303] k8s.go 608: Cleaning up netns ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.690 [INFO][5303] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" iface="eth0" netns="" Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.690 [INFO][5303] k8s.go 615: Releasing IP address(es) ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.690 [INFO][5303] utils.go 188: Calico CNI releasing IP address ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.740 [INFO][5309] ipam_plugin.go 411: Releasing address using handleID ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" HandleID="k8s-pod-network.53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.740 [INFO][5309] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.740 [INFO][5309] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.751 [WARNING][5309] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" HandleID="k8s-pod-network.53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.751 [INFO][5309] ipam_plugin.go 439: Releasing address using workloadID ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" HandleID="k8s-pod-network.53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Workload="ip--172--31--23--160-k8s-calico--kube--controllers--55cb76f7f7--v94d6-eth0" Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.754 [INFO][5309] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:48.761445 containerd[1977]: 2024-07-02 00:17:48.758 [INFO][5303] k8s.go 621: Teardown processing complete. ContainerID="53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8" Jul 2 00:17:48.762726 containerd[1977]: time="2024-07-02T00:17:48.761486429Z" level=info msg="TearDown network for sandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\" successfully" Jul 2 00:17:48.782530 containerd[1977]: time="2024-07-02T00:17:48.782479066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:17:48.782684 containerd[1977]: time="2024-07-02T00:17:48.782566063Z" level=info msg="RemovePodSandbox \"53d02559ec2e900643d5ec32a65a50d497f49f841caa4723144041e69e6d4bb8\" returns successfully" Jul 2 00:17:48.783777 containerd[1977]: time="2024-07-02T00:17:48.783458020Z" level=info msg="StopPodSandbox for \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\"" Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.855 [WARNING][5328] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"545bdab3-f4e9-4d98-8606-4a0a243e8137", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061", Pod:"coredns-7db6d8ff4d-jds8w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic40f28b2b2d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.856 [INFO][5328] k8s.go 608: Cleaning up netns ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.856 [INFO][5328] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" iface="eth0" netns="" Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.857 [INFO][5328] k8s.go 615: Releasing IP address(es) ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.857 [INFO][5328] utils.go 188: Calico CNI releasing IP address ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.911 [INFO][5334] ipam_plugin.go 411: Releasing address using handleID ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" HandleID="k8s-pod-network.abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0" Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.911 [INFO][5334] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.911 [INFO][5334] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.927 [WARNING][5334] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" HandleID="k8s-pod-network.abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0" Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.927 [INFO][5334] ipam_plugin.go 439: Releasing address using workloadID ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" HandleID="k8s-pod-network.abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0" Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.930 [INFO][5334] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:48.938427 containerd[1977]: 2024-07-02 00:17:48.934 [INFO][5328] k8s.go 621: Teardown processing complete. 
ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Jul 2 00:17:48.939111 containerd[1977]: time="2024-07-02T00:17:48.938475455Z" level=info msg="TearDown network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\" successfully" Jul 2 00:17:48.939111 containerd[1977]: time="2024-07-02T00:17:48.938510750Z" level=info msg="StopPodSandbox for \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\" returns successfully" Jul 2 00:17:48.940385 containerd[1977]: time="2024-07-02T00:17:48.939805391Z" level=info msg="RemovePodSandbox for \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\"" Jul 2 00:17:48.940385 containerd[1977]: time="2024-07-02T00:17:48.939844066Z" level=info msg="Forcibly stopping sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\"" Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.023 [WARNING][5352] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"545bdab3-f4e9-4d98-8606-4a0a243e8137", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"a5157fa7672522aeab19ac4cd7ab7bdc28af3745772b2e13c8d11bb2898a1061", Pod:"coredns-7db6d8ff4d-jds8w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic40f28b2b2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.024 [INFO][5352] k8s.go 608: Cleaning up netns 
ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.024 [INFO][5352] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" iface="eth0" netns="" Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.024 [INFO][5352] k8s.go 615: Releasing IP address(es) ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.024 [INFO][5352] utils.go 188: Calico CNI releasing IP address ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.087 [INFO][5358] ipam_plugin.go 411: Releasing address using handleID ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" HandleID="k8s-pod-network.abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0" Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.087 [INFO][5358] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.088 [INFO][5358] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.100 [WARNING][5358] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" HandleID="k8s-pod-network.abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0" Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.100 [INFO][5358] ipam_plugin.go 439: Releasing address using workloadID ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" HandleID="k8s-pod-network.abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Workload="ip--172--31--23--160-k8s-coredns--7db6d8ff4d--jds8w-eth0" Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.123 [INFO][5358] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:17:49.150215 containerd[1977]: 2024-07-02 00:17:49.140 [INFO][5352] k8s.go 621: Teardown processing complete. ContainerID="abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63" Jul 2 00:17:49.151110 containerd[1977]: time="2024-07-02T00:17:49.150596806Z" level=info msg="TearDown network for sandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\" successfully" Jul 2 00:17:49.155760 containerd[1977]: time="2024-07-02T00:17:49.155535360Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:17:49.155760 containerd[1977]: time="2024-07-02T00:17:49.155613744Z" level=info msg="RemovePodSandbox \"abf2fa0e109e992cd8f6898790544565e44e061ab0a697c0a911683a50b2cd63\" returns successfully" Jul 2 00:17:49.277680 ntpd[1941]: Listen normally on 7 vxlan.calico 192.168.121.0:123 Jul 2 00:17:49.283287 ntpd[1941]: 2 Jul 00:17:49 ntpd[1941]: Listen normally on 7 vxlan.calico 192.168.121.0:123 Jul 2 00:17:49.283287 ntpd[1941]: 2 Jul 00:17:49 ntpd[1941]: Listen normally on 8 calic40f28b2b2d [fe80::ecee:eeff:feee:eeee%4]:123 Jul 2 00:17:49.283287 ntpd[1941]: 2 Jul 00:17:49 ntpd[1941]: Listen normally on 9 cali4d18caf649d [fe80::ecee:eeff:feee:eeee%5]:123 Jul 2 00:17:49.283287 ntpd[1941]: 2 Jul 00:17:49 ntpd[1941]: Listen normally on 10 vxlan.calico [fe80::6419:e8ff:fe1b:3dbb%6]:123 Jul 2 00:17:49.283287 ntpd[1941]: 2 Jul 00:17:49 ntpd[1941]: Listen normally on 11 calie8d5066b930 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:17:49.283287 ntpd[1941]: 2 Jul 00:17:49 ntpd[1941]: Listen normally on 12 cali21938ae4919 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:17:49.277776 ntpd[1941]: Listen normally on 8 calic40f28b2b2d [fe80::ecee:eeff:feee:eeee%4]:123 Jul 2 00:17:49.277833 ntpd[1941]: Listen normally on 9 cali4d18caf649d [fe80::ecee:eeff:feee:eeee%5]:123 Jul 2 00:17:49.277877 ntpd[1941]: Listen normally on 10 vxlan.calico [fe80::6419:e8ff:fe1b:3dbb%6]:123 Jul 2 00:17:49.277917 ntpd[1941]: Listen normally on 11 calie8d5066b930 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:17:49.277957 ntpd[1941]: Listen normally on 12 cali21938ae4919 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:17:49.309904 systemd[1]: Started sshd@9-172.31.23.160:22-147.75.109.163:42684.service - OpenSSH per-connection server daemon (147.75.109.163:42684). 
Jul 2 00:17:49.385138 containerd[1977]: time="2024-07-02T00:17:49.384431793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:49.386199 containerd[1977]: time="2024-07-02T00:17:49.386107690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 00:17:49.387896 containerd[1977]: time="2024-07-02T00:17:49.387834168Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:49.391142 containerd[1977]: time="2024-07-02T00:17:49.391057733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:49.392621 containerd[1977]: time="2024-07-02T00:17:49.392041561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.199783084s" Jul 2 00:17:49.392621 containerd[1977]: time="2024-07-02T00:17:49.392088371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 00:17:49.394114 containerd[1977]: time="2024-07-02T00:17:49.393678951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:17:49.416488 containerd[1977]: time="2024-07-02T00:17:49.416443604Z" level=info msg="CreateContainer within sandbox 
\"1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:17:49.456514 containerd[1977]: time="2024-07-02T00:17:49.456389441Z" level=info msg="CreateContainer within sandbox \"1f3aedc8dfa120ebdef56d4dc5e4963c6d74eeb9fd712807a29399ebc30d60e0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8dcac6281a2e814ab270d780dfb642bf6082d3f02209b7bcba05d88320d08979\"" Jul 2 00:17:49.457274 containerd[1977]: time="2024-07-02T00:17:49.457193222Z" level=info msg="StartContainer for \"8dcac6281a2e814ab270d780dfb642bf6082d3f02209b7bcba05d88320d08979\"" Jul 2 00:17:49.537529 systemd[1]: Started cri-containerd-8dcac6281a2e814ab270d780dfb642bf6082d3f02209b7bcba05d88320d08979.scope - libcontainer container 8dcac6281a2e814ab270d780dfb642bf6082d3f02209b7bcba05d88320d08979. Jul 2 00:17:49.584603 sshd[5365]: Accepted publickey for core from 147.75.109.163 port 42684 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:17:49.594558 sshd[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:49.610950 systemd-logind[1947]: New session 10 of user core. Jul 2 00:17:49.627468 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 00:17:49.728927 containerd[1977]: time="2024-07-02T00:17:49.727196185Z" level=info msg="StartContainer for \"8dcac6281a2e814ab270d780dfb642bf6082d3f02209b7bcba05d88320d08979\" returns successfully" Jul 2 00:17:50.268393 kubelet[3407]: I0702 00:17:50.267475 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55cb76f7f7-v94d6" podStartSLOduration=37.05614629 podStartE2EDuration="41.267450204s" podCreationTimestamp="2024-07-02 00:17:09 +0000 UTC" firstStartedPulling="2024-07-02 00:17:45.182208136 +0000 UTC m=+58.150594134" lastFinishedPulling="2024-07-02 00:17:49.393512036 +0000 UTC m=+62.361898048" observedRunningTime="2024-07-02 00:17:50.068976327 +0000 UTC m=+63.037362347" watchObservedRunningTime="2024-07-02 00:17:50.267450204 +0000 UTC m=+63.235836223" Jul 2 00:17:50.297670 sshd[5365]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:50.303909 systemd-logind[1947]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:17:50.305477 systemd[1]: sshd@9-172.31.23.160:22-147.75.109.163:42684.service: Deactivated successfully. Jul 2 00:17:50.310770 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:17:50.314767 systemd-logind[1947]: Removed session 10. 
Jul 2 00:17:51.365389 containerd[1977]: time="2024-07-02T00:17:51.365319897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:51.366695 containerd[1977]: time="2024-07-02T00:17:51.366544506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:17:51.369305 containerd[1977]: time="2024-07-02T00:17:51.368297090Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:51.371722 containerd[1977]: time="2024-07-02T00:17:51.371629971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:51.372517 containerd[1977]: time="2024-07-02T00:17:51.372479891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.978761758s" Jul 2 00:17:51.372671 containerd[1977]: time="2024-07-02T00:17:51.372650551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:17:51.375979 containerd[1977]: time="2024-07-02T00:17:51.375940424Z" level=info msg="CreateContainer within sandbox \"d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:17:51.408359 containerd[1977]: time="2024-07-02T00:17:51.408309388Z" level=info msg="CreateContainer within sandbox 
\"d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"da7df526af291636b723c5061d0752f0f671864a10c46a7282816f4889231441\"" Jul 2 00:17:51.409409 containerd[1977]: time="2024-07-02T00:17:51.409241178Z" level=info msg="StartContainer for \"da7df526af291636b723c5061d0752f0f671864a10c46a7282816f4889231441\"" Jul 2 00:17:51.453696 systemd[1]: run-containerd-runc-k8s.io-da7df526af291636b723c5061d0752f0f671864a10c46a7282816f4889231441-runc.xCDVkF.mount: Deactivated successfully. Jul 2 00:17:51.461469 systemd[1]: Started cri-containerd-da7df526af291636b723c5061d0752f0f671864a10c46a7282816f4889231441.scope - libcontainer container da7df526af291636b723c5061d0752f0f671864a10c46a7282816f4889231441. Jul 2 00:17:51.537962 containerd[1977]: time="2024-07-02T00:17:51.537909893Z" level=info msg="StartContainer for \"da7df526af291636b723c5061d0752f0f671864a10c46a7282816f4889231441\" returns successfully" Jul 2 00:17:51.539311 containerd[1977]: time="2024-07-02T00:17:51.539278635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:17:53.489898 containerd[1977]: time="2024-07-02T00:17:53.488465671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:53.491870 containerd[1977]: time="2024-07-02T00:17:53.491813383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:17:53.494345 containerd[1977]: time="2024-07-02T00:17:53.494287484Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:53.502053 containerd[1977]: time="2024-07-02T00:17:53.501876084Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:53.505086 containerd[1977]: time="2024-07-02T00:17:53.505036287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.965709889s" Jul 2 00:17:53.505221 containerd[1977]: time="2024-07-02T00:17:53.505086586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:17:53.509983 containerd[1977]: time="2024-07-02T00:17:53.509926797Z" level=info msg="CreateContainer within sandbox \"d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:17:53.588455 containerd[1977]: time="2024-07-02T00:17:53.586898473Z" level=info msg="CreateContainer within sandbox \"d22cbd97948af6a61033db885cfa63cfe9f2718d1bcd83626335e1505bb12cab\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4be3765cb2ccce7d1ecbf19e33a8f297ce8691994d9db08f651911700758bfa1\"" Jul 2 00:17:53.589226 containerd[1977]: time="2024-07-02T00:17:53.589191431Z" level=info msg="StartContainer for \"4be3765cb2ccce7d1ecbf19e33a8f297ce8691994d9db08f651911700758bfa1\"" Jul 2 00:17:53.593189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2779055753.mount: Deactivated successfully. 
Jul 2 00:17:53.721729 systemd[1]: run-containerd-runc-k8s.io-4be3765cb2ccce7d1ecbf19e33a8f297ce8691994d9db08f651911700758bfa1-runc.kdBDfx.mount: Deactivated successfully. Jul 2 00:17:53.740041 systemd[1]: Started cri-containerd-4be3765cb2ccce7d1ecbf19e33a8f297ce8691994d9db08f651911700758bfa1.scope - libcontainer container 4be3765cb2ccce7d1ecbf19e33a8f297ce8691994d9db08f651911700758bfa1. Jul 2 00:17:53.796372 containerd[1977]: time="2024-07-02T00:17:53.796289847Z" level=info msg="StartContainer for \"4be3765cb2ccce7d1ecbf19e33a8f297ce8691994d9db08f651911700758bfa1\" returns successfully" Jul 2 00:17:54.623316 kubelet[3407]: I0702 00:17:54.623271 3407 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:17:54.623316 kubelet[3407]: I0702 00:17:54.623330 3407 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:17:55.339781 systemd[1]: Started sshd@10-172.31.23.160:22-147.75.109.163:56538.service - OpenSSH per-connection server daemon (147.75.109.163:56538). Jul 2 00:17:55.550486 sshd[5535]: Accepted publickey for core from 147.75.109.163 port 56538 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:17:55.574333 sshd[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:17:55.605397 systemd-logind[1947]: New session 11 of user core. Jul 2 00:17:55.616483 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:17:56.093901 sshd[5535]: pam_unix(sshd:session): session closed for user core Jul 2 00:17:56.099628 systemd[1]: sshd@10-172.31.23.160:22-147.75.109.163:56538.service: Deactivated successfully. Jul 2 00:17:56.103352 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:17:56.105420 systemd-logind[1947]: Session 11 logged out. Waiting for processes to exit. 
Jul 2 00:17:56.108879 systemd-logind[1947]: Removed session 11. Jul 2 00:18:00.247415 systemd[1]: run-containerd-runc-k8s.io-8dcac6281a2e814ab270d780dfb642bf6082d3f02209b7bcba05d88320d08979-runc.NB3mlb.mount: Deactivated successfully. Jul 2 00:18:01.156586 systemd[1]: Started sshd@11-172.31.23.160:22-147.75.109.163:56550.service - OpenSSH per-connection server daemon (147.75.109.163:56550). Jul 2 00:18:01.206935 kubelet[3407]: I0702 00:18:01.206663 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rjx52" podStartSLOduration=44.868833407 podStartE2EDuration="52.206297289s" podCreationTimestamp="2024-07-02 00:17:09 +0000 UTC" firstStartedPulling="2024-07-02 00:17:46.169461797 +0000 UTC m=+59.137847795" lastFinishedPulling="2024-07-02 00:17:53.506925666 +0000 UTC m=+66.475311677" observedRunningTime="2024-07-02 00:17:54.100546571 +0000 UTC m=+67.068932591" watchObservedRunningTime="2024-07-02 00:18:01.206297289 +0000 UTC m=+74.174683309" Jul 2 00:18:01.380594 sshd[5612]: Accepted publickey for core from 147.75.109.163 port 56550 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:18:01.386642 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:01.402947 systemd-logind[1947]: New session 12 of user core. Jul 2 00:18:01.410518 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:18:01.826885 sshd[5612]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:01.840059 systemd[1]: sshd@11-172.31.23.160:22-147.75.109.163:56550.service: Deactivated successfully. Jul 2 00:18:01.875757 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:18:01.920395 systemd-logind[1947]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:18:01.932082 systemd[1]: Started sshd@12-172.31.23.160:22-147.75.109.163:56566.service - OpenSSH per-connection server daemon (147.75.109.163:56566). 
Jul 2 00:18:01.950384 systemd-logind[1947]: Removed session 12. Jul 2 00:18:02.222876 sshd[5625]: Accepted publickey for core from 147.75.109.163 port 56566 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:18:02.226824 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:02.280751 systemd-logind[1947]: New session 13 of user core. Jul 2 00:18:02.305580 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:18:02.918470 sshd[5625]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:02.932559 systemd[1]: sshd@12-172.31.23.160:22-147.75.109.163:56566.service: Deactivated successfully. Jul 2 00:18:02.935509 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:18:02.939204 systemd-logind[1947]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:18:02.948465 systemd[1]: Started sshd@13-172.31.23.160:22-147.75.109.163:34080.service - OpenSSH per-connection server daemon (147.75.109.163:34080). Jul 2 00:18:02.954117 systemd-logind[1947]: Removed session 13. Jul 2 00:18:03.147576 sshd[5636]: Accepted publickey for core from 147.75.109.163 port 34080 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:18:03.167573 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:03.183365 systemd-logind[1947]: New session 14 of user core. Jul 2 00:18:03.190913 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:18:03.608081 sshd[5636]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:03.615590 systemd[1]: sshd@13-172.31.23.160:22-147.75.109.163:34080.service: Deactivated successfully. Jul 2 00:18:03.618910 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:18:03.620148 systemd-logind[1947]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:18:03.621927 systemd-logind[1947]: Removed session 14. 
Jul 2 00:18:08.646006 systemd[1]: Started sshd@14-172.31.23.160:22-147.75.109.163:34088.service - OpenSSH per-connection server daemon (147.75.109.163:34088). Jul 2 00:18:08.840482 sshd[5665]: Accepted publickey for core from 147.75.109.163 port 34088 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:18:08.842170 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:08.847458 systemd-logind[1947]: New session 15 of user core. Jul 2 00:18:08.852687 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:18:09.197719 sshd[5665]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:09.205805 systemd-logind[1947]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:18:09.206849 systemd[1]: sshd@14-172.31.23.160:22-147.75.109.163:34088.service: Deactivated successfully. Jul 2 00:18:09.209430 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:18:09.210688 systemd-logind[1947]: Removed session 15. Jul 2 00:18:14.236650 systemd[1]: Started sshd@15-172.31.23.160:22-147.75.109.163:59594.service - OpenSSH per-connection server daemon (147.75.109.163:59594). Jul 2 00:18:14.449807 sshd[5682]: Accepted publickey for core from 147.75.109.163 port 59594 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:18:14.452336 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:14.458913 systemd-logind[1947]: New session 16 of user core. Jul 2 00:18:14.467492 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:18:14.938535 sshd[5682]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:14.955432 systemd[1]: sshd@15-172.31.23.160:22-147.75.109.163:59594.service: Deactivated successfully. Jul 2 00:18:14.961536 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:18:14.963867 systemd-logind[1947]: Session 16 logged out. Waiting for processes to exit. 
Jul 2 00:18:14.966443 systemd-logind[1947]: Removed session 16.
Jul 2 00:18:19.978544 systemd[1]: Started sshd@16-172.31.23.160:22-147.75.109.163:59602.service - OpenSSH per-connection server daemon (147.75.109.163:59602).
Jul 2 00:18:20.172258 sshd[5700]: Accepted publickey for core from 147.75.109.163 port 59602 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:20.190380 sshd[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:20.210000 systemd-logind[1947]: New session 17 of user core.
Jul 2 00:18:20.222573 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 00:18:20.609646 sshd[5700]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:20.622133 systemd[1]: sshd@16-172.31.23.160:22-147.75.109.163:59602.service: Deactivated successfully.
Jul 2 00:18:20.628999 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:18:20.634308 systemd-logind[1947]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:18:20.639888 systemd-logind[1947]: Removed session 17.
Jul 2 00:18:25.649651 systemd[1]: Started sshd@17-172.31.23.160:22-147.75.109.163:50022.service - OpenSSH per-connection server daemon (147.75.109.163:50022).
Jul 2 00:18:25.817291 sshd[5719]: Accepted publickey for core from 147.75.109.163 port 50022 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:25.819028 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:25.824413 systemd-logind[1947]: New session 18 of user core.
Jul 2 00:18:25.830474 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:18:26.047799 sshd[5719]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:26.053651 systemd[1]: sshd@17-172.31.23.160:22-147.75.109.163:50022.service: Deactivated successfully.
Jul 2 00:18:26.056536 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:18:26.058510 systemd-logind[1947]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:18:26.062003 systemd-logind[1947]: Removed session 18.
Jul 2 00:18:31.097345 systemd[1]: Started sshd@18-172.31.23.160:22-147.75.109.163:50026.service - OpenSSH per-connection server daemon (147.75.109.163:50026).
Jul 2 00:18:31.266773 sshd[5781]: Accepted publickey for core from 147.75.109.163 port 50026 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:31.269553 sshd[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:31.277785 systemd-logind[1947]: New session 19 of user core.
Jul 2 00:18:31.281688 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:18:31.540477 sshd[5781]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:31.546807 systemd[1]: sshd@18-172.31.23.160:22-147.75.109.163:50026.service: Deactivated successfully.
Jul 2 00:18:31.552474 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:18:31.555073 systemd-logind[1947]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:18:31.556759 systemd-logind[1947]: Removed session 19.
Jul 2 00:18:31.578955 systemd[1]: Started sshd@19-172.31.23.160:22-147.75.109.163:50030.service - OpenSSH per-connection server daemon (147.75.109.163:50030).
Jul 2 00:18:31.746185 sshd[5793]: Accepted publickey for core from 147.75.109.163 port 50030 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:31.747190 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:31.756346 systemd-logind[1947]: New session 20 of user core.
Jul 2 00:18:31.763671 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:18:32.529615 sshd[5793]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:32.534733 systemd[1]: sshd@19-172.31.23.160:22-147.75.109.163:50030.service: Deactivated successfully.
Jul 2 00:18:32.539017 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:18:32.540351 systemd-logind[1947]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:18:32.542986 systemd-logind[1947]: Removed session 20.
Jul 2 00:18:32.562320 systemd[1]: Started sshd@20-172.31.23.160:22-147.75.109.163:41746.service - OpenSSH per-connection server daemon (147.75.109.163:41746).
Jul 2 00:18:32.810193 sshd[5805]: Accepted publickey for core from 147.75.109.163 port 41746 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:32.813799 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:32.829614 systemd-logind[1947]: New session 21 of user core.
Jul 2 00:18:32.834475 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:18:33.631513 kubelet[3407]: I0702 00:18:33.631446 3407 topology_manager.go:215] "Topology Admit Handler" podUID="ed30717d-5da4-4f51-bb87-9c1c5784dfcb" podNamespace="calico-apiserver" podName="calico-apiserver-6744b7555-6jw99"
Jul 2 00:18:33.711529 systemd[1]: Created slice kubepods-besteffort-poded30717d_5da4_4f51_bb87_9c1c5784dfcb.slice - libcontainer container kubepods-besteffort-poded30717d_5da4_4f51_bb87_9c1c5784dfcb.slice.
Jul 2 00:18:33.739940 kubelet[3407]: I0702 00:18:33.739090 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8ft\" (UniqueName: \"kubernetes.io/projected/ed30717d-5da4-4f51-bb87-9c1c5784dfcb-kube-api-access-rd8ft\") pod \"calico-apiserver-6744b7555-6jw99\" (UID: \"ed30717d-5da4-4f51-bb87-9c1c5784dfcb\") " pod="calico-apiserver/calico-apiserver-6744b7555-6jw99"
Jul 2 00:18:33.739940 kubelet[3407]: I0702 00:18:33.739190 3407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ed30717d-5da4-4f51-bb87-9c1c5784dfcb-calico-apiserver-certs\") pod \"calico-apiserver-6744b7555-6jw99\" (UID: \"ed30717d-5da4-4f51-bb87-9c1c5784dfcb\") " pod="calico-apiserver/calico-apiserver-6744b7555-6jw99"
Jul 2 00:18:33.875255 kubelet[3407]: E0702 00:18:33.856276 3407 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Jul 2 00:18:33.899342 kubelet[3407]: E0702 00:18:33.899202 3407 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed30717d-5da4-4f51-bb87-9c1c5784dfcb-calico-apiserver-certs podName:ed30717d-5da4-4f51-bb87-9c1c5784dfcb nodeName:}" failed. No retries permitted until 2024-07-02 00:18:34.390438912 +0000 UTC m=+107.358824926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ed30717d-5da4-4f51-bb87-9c1c5784dfcb-calico-apiserver-certs") pod "calico-apiserver-6744b7555-6jw99" (UID: "ed30717d-5da4-4f51-bb87-9c1c5784dfcb") : secret "calico-apiserver-certs" not found
Jul 2 00:18:34.640515 containerd[1977]: time="2024-07-02T00:18:34.640454929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6744b7555-6jw99,Uid:ed30717d-5da4-4f51-bb87-9c1c5784dfcb,Namespace:calico-apiserver,Attempt:0,}"
Jul 2 00:18:35.445360 systemd-networkd[1810]: cali6a51f300e5d: Link UP
Jul 2 00:18:35.447823 systemd-networkd[1810]: cali6a51f300e5d: Gained carrier
Jul 2 00:18:35.472509 (udev-worker)[5845]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.113 [INFO][5827] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0 calico-apiserver-6744b7555- calico-apiserver ed30717d-5da4-4f51-bb87-9c1c5784dfcb 1102 0 2024-07-02 00:18:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6744b7555 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-160 calico-apiserver-6744b7555-6jw99 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a51f300e5d [] []}} ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Namespace="calico-apiserver" Pod="calico-apiserver-6744b7555-6jw99" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.114 [INFO][5827] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Namespace="calico-apiserver" Pod="calico-apiserver-6744b7555-6jw99" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.255 [INFO][5837] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" HandleID="k8s-pod-network.283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Workload="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.275 [INFO][5837] ipam_plugin.go 264: Auto assigning IP ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" HandleID="k8s-pod-network.283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Workload="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029e390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-160", "pod":"calico-apiserver-6744b7555-6jw99", "timestamp":"2024-07-02 00:18:35.254977872 +0000 UTC"}, Hostname:"ip-172-31-23-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.275 [INFO][5837] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.275 [INFO][5837] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.275 [INFO][5837] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-160'
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.279 [INFO][5837] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" host="ip-172-31-23-160"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.302 [INFO][5837] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-160"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.355 [INFO][5837] ipam.go 489: Trying affinity for 192.168.121.0/26 host="ip-172-31-23-160"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.359 [INFO][5837] ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ip-172-31-23-160"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.364 [INFO][5837] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ip-172-31-23-160"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.364 [INFO][5837] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" host="ip-172-31-23-160"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.368 [INFO][5837] ipam.go 1685: Creating new handle: k8s-pod-network.283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.385 [INFO][5837] ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" host="ip-172-31-23-160"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.424 [INFO][5837] ipam.go 1216: Successfully claimed IPs: [192.168.121.5/26] block=192.168.121.0/26 handle="k8s-pod-network.283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" host="ip-172-31-23-160"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.424 [INFO][5837] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.5/26] handle="k8s-pod-network.283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" host="ip-172-31-23-160"
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.424 [INFO][5837] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:18:35.522579 containerd[1977]: 2024-07-02 00:18:35.424 [INFO][5837] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.121.5/26] IPv6=[] ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" HandleID="k8s-pod-network.283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Workload="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0"
Jul 2 00:18:35.525792 containerd[1977]: 2024-07-02 00:18:35.429 [INFO][5827] k8s.go 386: Populated endpoint ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Namespace="calico-apiserver" Pod="calico-apiserver-6744b7555-6jw99" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0", GenerateName:"calico-apiserver-6744b7555-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed30717d-5da4-4f51-bb87-9c1c5784dfcb", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6744b7555", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"", Pod:"calico-apiserver-6744b7555-6jw99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a51f300e5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:18:35.525792 containerd[1977]: 2024-07-02 00:18:35.430 [INFO][5827] k8s.go 387: Calico CNI using IPs: [192.168.121.5/32] ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Namespace="calico-apiserver" Pod="calico-apiserver-6744b7555-6jw99" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0"
Jul 2 00:18:35.525792 containerd[1977]: 2024-07-02 00:18:35.430 [INFO][5827] dataplane_linux.go 68: Setting the host side veth name to cali6a51f300e5d ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Namespace="calico-apiserver" Pod="calico-apiserver-6744b7555-6jw99" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0"
Jul 2 00:18:35.525792 containerd[1977]: 2024-07-02 00:18:35.447 [INFO][5827] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Namespace="calico-apiserver" Pod="calico-apiserver-6744b7555-6jw99" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0"
Jul 2 00:18:35.525792 containerd[1977]: 2024-07-02 00:18:35.449 [INFO][5827] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Namespace="calico-apiserver" Pod="calico-apiserver-6744b7555-6jw99" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0", GenerateName:"calico-apiserver-6744b7555-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed30717d-5da4-4f51-bb87-9c1c5784dfcb", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 18, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6744b7555", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-160", ContainerID:"283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980", Pod:"calico-apiserver-6744b7555-6jw99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a51f300e5d", MAC:"5e:86:f5:c6:b7:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:18:35.525792 containerd[1977]: 2024-07-02 00:18:35.498 [INFO][5827] k8s.go 500: Wrote updated endpoint to datastore ContainerID="283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980" Namespace="calico-apiserver" Pod="calico-apiserver-6744b7555-6jw99" WorkloadEndpoint="ip--172--31--23--160-k8s-calico--apiserver--6744b7555--6jw99-eth0"
Jul 2 00:18:35.633412 containerd[1977]: time="2024-07-02T00:18:35.630177669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:18:35.633412 containerd[1977]: time="2024-07-02T00:18:35.631279095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:35.633412 containerd[1977]: time="2024-07-02T00:18:35.631319812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:18:35.633412 containerd[1977]: time="2024-07-02T00:18:35.631344134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:18:35.725833 systemd[1]: Started cri-containerd-283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980.scope - libcontainer container 283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980.
Jul 2 00:18:36.083187 containerd[1977]: time="2024-07-02T00:18:36.083039550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6744b7555-6jw99,Uid:ed30717d-5da4-4f51-bb87-9c1c5784dfcb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980\""
Jul 2 00:18:36.127300 containerd[1977]: time="2024-07-02T00:18:36.126092840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul 2 00:18:36.602380 sshd[5805]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:36.622145 systemd[1]: sshd@20-172.31.23.160:22-147.75.109.163:41746.service: Deactivated successfully.
Jul 2 00:18:36.637964 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:18:36.640007 systemd-networkd[1810]: cali6a51f300e5d: Gained IPv6LL
Jul 2 00:18:36.665957 systemd-logind[1947]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:18:36.667128 systemd[1]: Started sshd@21-172.31.23.160:22-147.75.109.163:41750.service - OpenSSH per-connection server daemon (147.75.109.163:41750).
Jul 2 00:18:36.675671 systemd-logind[1947]: Removed session 21.
Jul 2 00:18:36.868407 sshd[5907]: Accepted publickey for core from 147.75.109.163 port 41750 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:36.873454 sshd[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:36.890474 systemd-logind[1947]: New session 22 of user core.
Jul 2 00:18:36.895525 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:18:38.369026 sshd[5907]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:38.378424 systemd-logind[1947]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:18:38.384280 systemd[1]: sshd@21-172.31.23.160:22-147.75.109.163:41750.service: Deactivated successfully.
Jul 2 00:18:38.389869 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:18:38.410787 systemd[1]: Started sshd@22-172.31.23.160:22-147.75.109.163:41766.service - OpenSSH per-connection server daemon (147.75.109.163:41766).
Jul 2 00:18:38.413508 systemd-logind[1947]: Removed session 22.
Jul 2 00:18:38.683689 sshd[5922]: Accepted publickey for core from 147.75.109.163 port 41766 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:38.687161 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:38.703747 systemd-logind[1947]: New session 23 of user core.
Jul 2 00:18:38.707507 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:18:39.162019 sshd[5922]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:39.192639 systemd-logind[1947]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:18:39.194499 systemd[1]: sshd@22-172.31.23.160:22-147.75.109.163:41766.service: Deactivated successfully.
Jul 2 00:18:39.202004 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:18:39.204884 systemd-logind[1947]: Removed session 23.
Jul 2 00:18:39.272820 ntpd[1941]: Listen normally on 13 cali6a51f300e5d [fe80::ecee:eeff:feee:eeee%11]:123
Jul 2 00:18:39.273700 ntpd[1941]: 2 Jul 00:18:39 ntpd[1941]: Listen normally on 13 cali6a51f300e5d [fe80::ecee:eeff:feee:eeee%11]:123
Jul 2 00:18:40.426658 containerd[1977]: time="2024-07-02T00:18:40.426537485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:40.430092 containerd[1977]: time="2024-07-02T00:18:40.430030011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jul 2 00:18:40.442266 containerd[1977]: time="2024-07-02T00:18:40.442077969Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:40.446052 containerd[1977]: time="2024-07-02T00:18:40.445718008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:18:40.447123 containerd[1977]: time="2024-07-02T00:18:40.446848393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.320368403s"
Jul 2 00:18:40.447123 containerd[1977]: time="2024-07-02T00:18:40.446898784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul 2 00:18:40.450574 containerd[1977]: time="2024-07-02T00:18:40.450432009Z" level=info msg="CreateContainer within sandbox \"283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 00:18:40.475715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927688413.mount: Deactivated successfully.
Jul 2 00:18:40.502293 containerd[1977]: time="2024-07-02T00:18:40.502218549Z" level=info msg="CreateContainer within sandbox \"283b02680b0ac1a4ef1329fd18c423b219e29ba29f852e29af8aa515ff0ac980\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2e54a108c392b01e3216d0f9eab094afbf140e9b2bac66f52b9c7f0a79357e64\""
Jul 2 00:18:40.503337 containerd[1977]: time="2024-07-02T00:18:40.503252753Z" level=info msg="StartContainer for \"2e54a108c392b01e3216d0f9eab094afbf140e9b2bac66f52b9c7f0a79357e64\""
Jul 2 00:18:40.561615 systemd[1]: Started cri-containerd-2e54a108c392b01e3216d0f9eab094afbf140e9b2bac66f52b9c7f0a79357e64.scope - libcontainer container 2e54a108c392b01e3216d0f9eab094afbf140e9b2bac66f52b9c7f0a79357e64.
Jul 2 00:18:40.649105 containerd[1977]: time="2024-07-02T00:18:40.649043322Z" level=info msg="StartContainer for \"2e54a108c392b01e3216d0f9eab094afbf140e9b2bac66f52b9c7f0a79357e64\" returns successfully"
Jul 2 00:18:42.140093 kubelet[3407]: I0702 00:18:42.138662 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6744b7555-6jw99" podStartSLOduration=4.803747237 podStartE2EDuration="9.126918083s" podCreationTimestamp="2024-07-02 00:18:33 +0000 UTC" firstStartedPulling="2024-07-02 00:18:36.125008981 +0000 UTC m=+109.093395016" lastFinishedPulling="2024-07-02 00:18:40.448179862 +0000 UTC m=+113.416565862" observedRunningTime="2024-07-02 00:18:41.440643464 +0000 UTC m=+114.409029463" watchObservedRunningTime="2024-07-02 00:18:42.126918083 +0000 UTC m=+115.095304104"
Jul 2 00:18:44.204625 systemd[1]: Started sshd@23-172.31.23.160:22-147.75.109.163:59618.service - OpenSSH per-connection server daemon (147.75.109.163:59618).
Jul 2 00:18:44.488521 sshd[5988]: Accepted publickey for core from 147.75.109.163 port 59618 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:44.491521 sshd[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:44.502946 systemd-logind[1947]: New session 24 of user core.
Jul 2 00:18:44.509623 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:18:45.122520 sshd[5988]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:45.131074 systemd[1]: sshd@23-172.31.23.160:22-147.75.109.163:59618.service: Deactivated successfully.
Jul 2 00:18:45.138807 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:18:45.141865 systemd-logind[1947]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:18:45.145052 systemd-logind[1947]: Removed session 24.
Jul 2 00:18:50.163757 systemd[1]: Started sshd@24-172.31.23.160:22-147.75.109.163:59630.service - OpenSSH per-connection server daemon (147.75.109.163:59630).
Jul 2 00:18:50.351441 sshd[6015]: Accepted publickey for core from 147.75.109.163 port 59630 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:50.352957 sshd[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:50.361311 systemd-logind[1947]: New session 25 of user core.
Jul 2 00:18:50.368597 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:18:50.602417 sshd[6015]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:50.608627 systemd-logind[1947]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:18:50.609668 systemd[1]: sshd@24-172.31.23.160:22-147.75.109.163:59630.service: Deactivated successfully.
Jul 2 00:18:50.612696 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:18:50.620766 systemd-logind[1947]: Removed session 25.
Jul 2 00:18:55.641120 systemd[1]: Started sshd@25-172.31.23.160:22-147.75.109.163:59236.service - OpenSSH per-connection server daemon (147.75.109.163:59236).
Jul 2 00:18:55.813023 sshd[6029]: Accepted publickey for core from 147.75.109.163 port 59236 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:18:55.817583 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:55.822587 systemd-logind[1947]: New session 26 of user core.
Jul 2 00:18:55.829450 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:18:56.029114 sshd[6029]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:56.045765 systemd[1]: sshd@25-172.31.23.160:22-147.75.109.163:59236.service: Deactivated successfully.
Jul 2 00:18:56.059996 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:18:56.084059 systemd-logind[1947]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:18:56.089995 systemd-logind[1947]: Removed session 26.
Jul 2 00:19:00.137355 systemd[1]: run-containerd-runc-k8s.io-8dcac6281a2e814ab270d780dfb642bf6082d3f02209b7bcba05d88320d08979-runc.3uYvOe.mount: Deactivated successfully.
Jul 2 00:19:01.072190 systemd[1]: Started sshd@26-172.31.23.160:22-147.75.109.163:59250.service - OpenSSH per-connection server daemon (147.75.109.163:59250).
Jul 2 00:19:01.283701 sshd[6102]: Accepted publickey for core from 147.75.109.163 port 59250 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:19:01.287144 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:01.299364 systemd-logind[1947]: New session 27 of user core.
Jul 2 00:19:01.302780 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 00:19:01.680529 sshd[6102]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:01.690927 systemd-logind[1947]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:19:01.692336 systemd[1]: sshd@26-172.31.23.160:22-147.75.109.163:59250.service: Deactivated successfully.
Jul 2 00:19:01.698701 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:19:01.707920 systemd-logind[1947]: Removed session 27.
Jul 2 00:19:06.726147 systemd[1]: Started sshd@27-172.31.23.160:22-147.75.109.163:46202.service - OpenSSH per-connection server daemon (147.75.109.163:46202).
Jul 2 00:19:06.947027 sshd[6130]: Accepted publickey for core from 147.75.109.163 port 46202 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:19:06.949730 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:06.979669 systemd-logind[1947]: New session 28 of user core.
Jul 2 00:19:07.010577 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 2 00:19:07.331944 sshd[6130]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:07.341436 systemd[1]: sshd@27-172.31.23.160:22-147.75.109.163:46202.service: Deactivated successfully.
Jul 2 00:19:07.354925 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 00:19:07.360851 systemd-logind[1947]: Session 28 logged out. Waiting for processes to exit.
Jul 2 00:19:07.362739 systemd-logind[1947]: Removed session 28.
Jul 2 00:19:12.367667 systemd[1]: Started sshd@28-172.31.23.160:22-147.75.109.163:46212.service - OpenSSH per-connection server daemon (147.75.109.163:46212).
Jul 2 00:19:12.530807 sshd[6153]: Accepted publickey for core from 147.75.109.163 port 46212 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:19:12.534596 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:12.543242 systemd-logind[1947]: New session 29 of user core.
Jul 2 00:19:12.547466 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 2 00:19:12.788736 sshd[6153]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:12.795555 systemd-logind[1947]: Session 29 logged out. Waiting for processes to exit.
Jul 2 00:19:12.798853 systemd[1]: sshd@28-172.31.23.160:22-147.75.109.163:46212.service: Deactivated successfully.
Jul 2 00:19:12.802016 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 00:19:12.803607 systemd-logind[1947]: Removed session 29.
Jul 2 00:19:17.853091 systemd[1]: Started sshd@29-172.31.23.160:22-147.75.109.163:36166.service - OpenSSH per-connection server daemon (147.75.109.163:36166).
Jul 2 00:19:18.028310 sshd[6178]: Accepted publickey for core from 147.75.109.163 port 36166 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:19:18.030091 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:18.035881 systemd-logind[1947]: New session 30 of user core.
Jul 2 00:19:18.043775 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 2 00:19:18.324760 sshd[6178]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:18.337769 systemd[1]: sshd@29-172.31.23.160:22-147.75.109.163:36166.service: Deactivated successfully.
Jul 2 00:19:18.340963 systemd[1]: session-30.scope: Deactivated successfully.
Jul 2 00:19:18.343074 systemd-logind[1947]: Session 30 logged out. Waiting for processes to exit.
Jul 2 00:19:18.346771 systemd-logind[1947]: Removed session 30.
Jul 2 00:19:33.197009 systemd[1]: cri-containerd-bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b.scope: Deactivated successfully.
Jul 2 00:19:33.197319 systemd[1]: cri-containerd-bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b.scope: Consumed 7.256s CPU time.
Jul 2 00:19:33.435684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b-rootfs.mount: Deactivated successfully.
Jul 2 00:19:33.459649 containerd[1977]: time="2024-07-02T00:19:33.422201128Z" level=info msg="shim disconnected" id=bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b namespace=k8s.io
Jul 2 00:19:33.459649 containerd[1977]: time="2024-07-02T00:19:33.459123425Z" level=warning msg="cleaning up after shim disconnected" id=bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b namespace=k8s.io
Jul 2 00:19:33.459649 containerd[1977]: time="2024-07-02T00:19:33.459141835Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:33.701068 kubelet[3407]: I0702 00:19:33.701025 3407 scope.go:117] "RemoveContainer" containerID="bddb493b1b430282bddadea978833b630f3307697b58b4463c0cd3634ec4c74b"
Jul 2 00:19:33.709271 containerd[1977]: time="2024-07-02T00:19:33.709195786Z" level=info msg="CreateContainer within sandbox \"de3929a7636c8bca9a72565dc77decfc166da03b2e2a19c40b9f6ea24ea95916\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 2 00:19:33.785199 containerd[1977]: time="2024-07-02T00:19:33.784781341Z" level=info msg="CreateContainer within sandbox \"de3929a7636c8bca9a72565dc77decfc166da03b2e2a19c40b9f6ea24ea95916\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e3a953920506edb35821429d263df96553a132f2c5700e63a27c353bcb7ccbbf\""
Jul 2 00:19:33.786244 containerd[1977]: time="2024-07-02T00:19:33.786201888Z" level=info msg="StartContainer for \"e3a953920506edb35821429d263df96553a132f2c5700e63a27c353bcb7ccbbf\""
Jul 2 00:19:33.852491 systemd[1]: Started cri-containerd-e3a953920506edb35821429d263df96553a132f2c5700e63a27c353bcb7ccbbf.scope - libcontainer container e3a953920506edb35821429d263df96553a132f2c5700e63a27c353bcb7ccbbf.
Jul 2 00:19:33.914038 systemd[1]: cri-containerd-fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8.scope: Deactivated successfully.
Jul 2 00:19:33.914631 systemd[1]: cri-containerd-fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8.scope: Consumed 4.431s CPU time, 26.2M memory peak, 0B memory swap peak.
Jul 2 00:19:33.916817 containerd[1977]: time="2024-07-02T00:19:33.916142519Z" level=info msg="StartContainer for \"e3a953920506edb35821429d263df96553a132f2c5700e63a27c353bcb7ccbbf\" returns successfully"
Jul 2 00:19:33.955338 containerd[1977]: time="2024-07-02T00:19:33.955267424Z" level=info msg="shim disconnected" id=fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8 namespace=k8s.io
Jul 2 00:19:33.956030 containerd[1977]: time="2024-07-02T00:19:33.955578818Z" level=warning msg="cleaning up after shim disconnected" id=fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8 namespace=k8s.io
Jul 2 00:19:33.956030 containerd[1977]: time="2024-07-02T00:19:33.955606349Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:34.427525 systemd[1]: run-containerd-runc-k8s.io-e3a953920506edb35821429d263df96553a132f2c5700e63a27c353bcb7ccbbf-runc.zbfvsS.mount: Deactivated successfully.
Jul 2 00:19:34.427664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8-rootfs.mount: Deactivated successfully.
Jul 2 00:19:34.704001 kubelet[3407]: I0702 00:19:34.703846 3407 scope.go:117] "RemoveContainer" containerID="fce2dd2bacd6b6e00cc631cfc14b1814aeb8338060a5927f39519aadebcac6a8"
Jul 2 00:19:34.710083 containerd[1977]: time="2024-07-02T00:19:34.710034349Z" level=info msg="CreateContainer within sandbox \"25e26456ac0fa5351c40504e408672d2ae96708d8facb93479d4ac7f8ca8fccb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 00:19:34.751668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1804321778.mount: Deactivated successfully.
Jul 2 00:19:34.771251 containerd[1977]: time="2024-07-02T00:19:34.771180834Z" level=info msg="CreateContainer within sandbox \"25e26456ac0fa5351c40504e408672d2ae96708d8facb93479d4ac7f8ca8fccb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ca997a869b47020e229eab44cfc73f366a19d9d54e6bb6f3a6f28ea5f52a760a\""
Jul 2 00:19:34.771847 containerd[1977]: time="2024-07-02T00:19:34.771817081Z" level=info msg="StartContainer for \"ca997a869b47020e229eab44cfc73f366a19d9d54e6bb6f3a6f28ea5f52a760a\""
Jul 2 00:19:34.839507 systemd[1]: Started cri-containerd-ca997a869b47020e229eab44cfc73f366a19d9d54e6bb6f3a6f28ea5f52a760a.scope - libcontainer container ca997a869b47020e229eab44cfc73f366a19d9d54e6bb6f3a6f28ea5f52a760a.
Jul 2 00:19:34.949816 containerd[1977]: time="2024-07-02T00:19:34.949742312Z" level=info msg="StartContainer for \"ca997a869b47020e229eab44cfc73f366a19d9d54e6bb6f3a6f28ea5f52a760a\" returns successfully"
Jul 2 00:19:37.508953 systemd[1]: cri-containerd-f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b.scope: Deactivated successfully.
Jul 2 00:19:37.509294 systemd[1]: cri-containerd-f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b.scope: Consumed 2.311s CPU time, 20.8M memory peak, 0B memory swap peak.
Jul 2 00:19:37.557089 containerd[1977]: time="2024-07-02T00:19:37.556972355Z" level=info msg="shim disconnected" id=f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b namespace=k8s.io
Jul 2 00:19:37.559777 containerd[1977]: time="2024-07-02T00:19:37.559287094Z" level=warning msg="cleaning up after shim disconnected" id=f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b namespace=k8s.io
Jul 2 00:19:37.559777 containerd[1977]: time="2024-07-02T00:19:37.559491830Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:37.562987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b-rootfs.mount: Deactivated successfully.
Jul 2 00:19:37.747221 kubelet[3407]: I0702 00:19:37.747187 3407 scope.go:117] "RemoveContainer" containerID="f2d63e70bf630e061109ac6b0f1436ea8c75ecc386799e928e980d2ed81a481b"
Jul 2 00:19:37.751130 containerd[1977]: time="2024-07-02T00:19:37.750805039Z" level=info msg="CreateContainer within sandbox \"4be6880771a2695355eb7006f6b2254f9e9b6efaa08fd5d25aad43faac05c304\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 00:19:37.778129 containerd[1977]: time="2024-07-02T00:19:37.777920390Z" level=info msg="CreateContainer within sandbox \"4be6880771a2695355eb7006f6b2254f9e9b6efaa08fd5d25aad43faac05c304\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"dcc41e0db79e2a091930260a451c8db030cc931c0fa7e7c94c84cbf23da97c62\""
Jul 2 00:19:37.780491 containerd[1977]: time="2024-07-02T00:19:37.779862389Z" level=info msg="StartContainer for \"dcc41e0db79e2a091930260a451c8db030cc931c0fa7e7c94c84cbf23da97c62\""
Jul 2 00:19:37.839521 systemd[1]: Started cri-containerd-dcc41e0db79e2a091930260a451c8db030cc931c0fa7e7c94c84cbf23da97c62.scope - libcontainer container dcc41e0db79e2a091930260a451c8db030cc931c0fa7e7c94c84cbf23da97c62.
Jul 2 00:19:38.023157 containerd[1977]: time="2024-07-02T00:19:38.023103666Z" level=info msg="StartContainer for \"dcc41e0db79e2a091930260a451c8db030cc931c0fa7e7c94c84cbf23da97c62\" returns successfully"
Jul 2 00:19:38.564882 systemd[1]: run-containerd-runc-k8s.io-dcc41e0db79e2a091930260a451c8db030cc931c0fa7e7c94c84cbf23da97c62-runc.uBWqEJ.mount: Deactivated successfully.
Jul 2 00:19:40.613216 kubelet[3407]: E0702 00:19:40.609784 3407 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-160?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 00:19:50.618672 kubelet[3407]: E0702 00:19:50.618522 3407 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-160?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"