Jul 2 00:22:49.126339 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:22:49.126391 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:22:49.126406 kernel: BIOS-provided physical RAM map:
Jul 2 00:22:49.126417 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:22:49.126428 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:22:49.126439 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:22:49.126458 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jul 2 00:22:49.126470 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jul 2 00:22:49.126482 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jul 2 00:22:49.126493 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:22:49.126505 kernel: NX (Execute Disable) protection: active
Jul 2 00:22:49.126517 kernel: APIC: Static calls initialized
Jul 2 00:22:49.126530 kernel: SMBIOS 2.7 present.
Jul 2 00:22:49.126542 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 2 00:22:49.126561 kernel: Hypervisor detected: KVM
Jul 2 00:22:49.126574 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:22:49.126588 kernel: kvm-clock: using sched offset of 8016326455 cycles
Jul 2 00:22:49.126603 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:22:49.126617 kernel: tsc: Detected 2499.992 MHz processor
Jul 2 00:22:49.126631 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:22:49.126646 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:22:49.126663 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jul 2 00:22:49.126677 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:22:49.126690 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:22:49.126704 kernel: Using GB pages for direct mapping
Jul 2 00:22:49.126718 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:22:49.126732 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jul 2 00:22:49.126746 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jul 2 00:22:49.126760 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 00:22:49.126774 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 2 00:22:49.126791 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jul 2 00:22:49.126804 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 2 00:22:49.126818 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 00:22:49.126832 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 2 00:22:49.126846 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 00:22:49.126859 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 2 00:22:49.126873 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 2 00:22:49.126886 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 2 00:22:49.126903 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jul 2 00:22:49.126917 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jul 2 00:22:49.126937 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jul 2 00:22:49.126952 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jul 2 00:22:49.126966 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jul 2 00:22:49.126981 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jul 2 00:22:49.126999 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jul 2 00:22:49.127013 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jul 2 00:22:49.127028 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jul 2 00:22:49.127252 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jul 2 00:22:49.127273 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 00:22:49.127287 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 00:22:49.127302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 2 00:22:49.127317 kernel: NUMA: Initialized distance table, cnt=1
Jul 2 00:22:49.127332 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jul 2 00:22:49.127352 kernel: Zone ranges:
Jul 2 00:22:49.127367 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:22:49.127382 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jul 2 00:22:49.127397 kernel: Normal empty
Jul 2 00:22:49.127411 kernel: Movable zone start for each node
Jul 2 00:22:49.127426 kernel: Early memory node ranges
Jul 2 00:22:49.127441 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:22:49.127455 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jul 2 00:22:49.127470 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jul 2 00:22:49.127488 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:22:49.127503 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:22:49.127518 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jul 2 00:22:49.127533 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 2 00:22:49.127547 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:22:49.127562 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 2 00:22:49.127577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:22:49.127592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:22:49.127607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:22:49.127625 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:22:49.127640 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:22:49.127655 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:22:49.127669 kernel: TSC deadline timer available
Jul 2 00:22:49.127684 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:22:49.127700 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:22:49.127715 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jul 2 00:22:49.127729 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:22:49.127752 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:22:49.127767 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:22:49.127786 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:22:49.127801 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:22:49.127815 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:22:49.127829 kernel: kvm-guest: PV spinlocks enabled
Jul 2 00:22:49.127844 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:22:49.127861 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:22:49.127876 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:22:49.127894 kernel: random: crng init done
Jul 2 00:22:49.127908 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:22:49.127923 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:22:49.127938 kernel: Fallback order for Node 0: 0
Jul 2 00:22:49.127952 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jul 2 00:22:49.127967 kernel: Policy zone: DMA32
Jul 2 00:22:49.127982 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:22:49.127997 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved)
Jul 2 00:22:49.128012 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:22:49.128030 kernel: Kernel/User page tables isolation: enabled
Jul 2 00:22:49.132094 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:22:49.132130 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:22:49.132146 kernel: Dynamic Preempt: voluntary
Jul 2 00:22:49.132162 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:22:49.132179 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:22:49.132194 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:22:49.132209 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:22:49.132224 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:22:49.132347 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:22:49.132372 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:22:49.132388 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:22:49.132403 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 00:22:49.132418 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:22:49.132433 kernel: Console: colour VGA+ 80x25
Jul 2 00:22:49.132448 kernel: printk: console [ttyS0] enabled
Jul 2 00:22:49.132463 kernel: ACPI: Core revision 20230628
Jul 2 00:22:49.132478 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 2 00:22:49.132493 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:22:49.133318 kernel: x2apic enabled
Jul 2 00:22:49.133336 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:22:49.133363 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093255d7c, max_idle_ns: 440795319144 ns
Jul 2 00:22:49.133383 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499992)
Jul 2 00:22:49.133399 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 00:22:49.133415 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 00:22:49.133431 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:22:49.133552 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:22:49.133569 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:22:49.133585 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:22:49.133601 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 2 00:22:49.133617 kernel: RETBleed: Vulnerable
Jul 2 00:22:49.133637 kernel: Speculative Store Bypass: Vulnerable
Jul 2 00:22:49.133652 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:22:49.133667 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:22:49.133683 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 00:22:49.133698 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:22:49.133714 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:22:49.133734 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:22:49.133749 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 2 00:22:49.133764 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 2 00:22:49.133780 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 2 00:22:49.133795 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 2 00:22:49.133811 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 2 00:22:49.133827 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 2 00:22:49.133842 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:22:49.133857 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 2 00:22:49.133873 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 2 00:22:49.133888 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 2 00:22:49.133907 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 2 00:22:49.133922 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 2 00:22:49.133937 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 2 00:22:49.133953 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 2 00:22:49.133969 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:22:49.133984 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:22:49.134000 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:22:49.134015 kernel: SELinux: Initializing.
Jul 2 00:22:49.134031 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:22:49.135135 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:22:49.135164 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 2 00:22:49.135181 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:22:49.135203 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:22:49.135219 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:22:49.135235 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 2 00:22:49.135252 kernel: signal: max sigframe size: 3632
Jul 2 00:22:49.135267 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:22:49.135284 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:22:49.135301 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 00:22:49.135316 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:22:49.135332 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:22:49.135351 kernel: .... node #0, CPUs: #1
Jul 2 00:22:49.135368 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 2 00:22:49.135386 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 00:22:49.135401 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:22:49.135417 kernel: smpboot: Max logical packages: 1
Jul 2 00:22:49.135433 kernel: smpboot: Total of 2 processors activated (9999.96 BogoMIPS)
Jul 2 00:22:49.135449 kernel: devtmpfs: initialized
Jul 2 00:22:49.135464 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:22:49.135483 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:22:49.135499 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:22:49.135515 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:22:49.135531 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:22:49.135547 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:22:49.135563 kernel: audit: type=2000 audit(1719879767.391:1): state=initialized audit_enabled=0 res=1
Jul 2 00:22:49.135578 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:22:49.135594 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:22:49.135610 kernel: cpuidle: using governor menu
Jul 2 00:22:49.135630 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:22:49.135645 kernel: dca service started, version 1.12.1
Jul 2 00:22:49.135661 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:22:49.135677 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:22:49.135693 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:22:49.135709 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:22:49.135725 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:22:49.135748 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:22:49.135762 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:22:49.135781 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:22:49.135797 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:22:49.135813 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:22:49.135828 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 2 00:22:49.135844 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:22:49.135860 kernel: ACPI: Interpreter enabled
Jul 2 00:22:49.135875 kernel: ACPI: PM: (supports S0 S5)
Jul 2 00:22:49.135891 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:22:49.135906 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:22:49.135926 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:22:49.135942 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jul 2 00:22:49.135958 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:22:49.138233 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:22:49.138400 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 2 00:22:49.138598 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 2 00:22:49.138620 kernel: acpiphp: Slot [3] registered
Jul 2 00:22:49.138644 kernel: acpiphp: Slot [4] registered
Jul 2 00:22:49.138660 kernel: acpiphp: Slot [5] registered
Jul 2 00:22:49.138676 kernel: acpiphp: Slot [6] registered
Jul 2 00:22:49.138692 kernel: acpiphp: Slot [7] registered
Jul 2 00:22:49.138708 kernel: acpiphp: Slot [8] registered
Jul 2 00:22:49.138724 kernel: acpiphp: Slot [9] registered
Jul 2 00:22:49.138740 kernel: acpiphp: Slot [10] registered
Jul 2 00:22:49.138756 kernel: acpiphp: Slot [11] registered
Jul 2 00:22:49.138772 kernel: acpiphp: Slot [12] registered
Jul 2 00:22:49.138788 kernel: acpiphp: Slot [13] registered
Jul 2 00:22:49.138808 kernel: acpiphp: Slot [14] registered
Jul 2 00:22:49.138823 kernel: acpiphp: Slot [15] registered
Jul 2 00:22:49.138839 kernel: acpiphp: Slot [16] registered
Jul 2 00:22:49.138855 kernel: acpiphp: Slot [17] registered
Jul 2 00:22:49.138870 kernel: acpiphp: Slot [18] registered
Jul 2 00:22:49.138886 kernel: acpiphp: Slot [19] registered
Jul 2 00:22:49.138902 kernel: acpiphp: Slot [20] registered
Jul 2 00:22:49.138918 kernel: acpiphp: Slot [21] registered
Jul 2 00:22:49.138934 kernel: acpiphp: Slot [22] registered
Jul 2 00:22:49.138953 kernel: acpiphp: Slot [23] registered
Jul 2 00:22:49.138969 kernel: acpiphp: Slot [24] registered
Jul 2 00:22:49.138985 kernel: acpiphp: Slot [25] registered
Jul 2 00:22:49.139001 kernel: acpiphp: Slot [26] registered
Jul 2 00:22:49.139017 kernel: acpiphp: Slot [27] registered
Jul 2 00:22:49.139033 kernel: acpiphp: Slot [28] registered
Jul 2 00:22:49.141101 kernel: acpiphp: Slot [29] registered
Jul 2 00:22:49.141126 kernel: acpiphp: Slot [30] registered
Jul 2 00:22:49.141143 kernel: acpiphp: Slot [31] registered
Jul 2 00:22:49.141159 kernel: PCI host bridge to bus 0000:00
Jul 2 00:22:49.141356 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:22:49.141488 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:22:49.141612 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:22:49.141735 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 00:22:49.141921 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:22:49.147619 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:22:49.147826 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:22:49.147988 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jul 2 00:22:49.149206 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 2 00:22:49.149367 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jul 2 00:22:49.149673 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 2 00:22:49.149820 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 2 00:22:49.152569 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 2 00:22:49.152753 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 2 00:22:49.152974 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 2 00:22:49.157344 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 2 00:22:49.157722 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jul 2 00:22:49.158022 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jul 2 00:22:49.158273 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 2 00:22:49.158413 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:22:49.158569 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 00:22:49.158706 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jul 2 00:22:49.158924 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 00:22:49.161143 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jul 2 00:22:49.161178 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:22:49.161195 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:22:49.161211 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:22:49.161235 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:22:49.161251 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:22:49.161267 kernel: iommu: Default domain type: Translated
Jul 2 00:22:49.161283 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:22:49.161298 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:22:49.161314 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:22:49.161331 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:22:49.161345 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jul 2 00:22:49.161695 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 2 00:22:49.161848 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 2 00:22:49.161987 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:22:49.162007 kernel: vgaarb: loaded
Jul 2 00:22:49.162024 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 2 00:22:49.162040 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 2 00:22:49.165109 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:22:49.165128 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:22:49.165145 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:22:49.165161 kernel: pnp: PnP ACPI init
Jul 2 00:22:49.165184 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 00:22:49.165200 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:22:49.165215 kernel: NET: Registered PF_INET protocol family
Jul 2 00:22:49.165230 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:22:49.165246 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 00:22:49.165261 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:22:49.165276 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:22:49.165292 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 00:22:49.165309 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 00:22:49.165324 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:22:49.165339 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:22:49.165354 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:22:49.165368 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:22:49.165611 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:22:49.165728 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:22:49.165837 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:22:49.165950 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 00:22:49.168135 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:22:49.168168 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:22:49.168185 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 00:22:49.168200 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093255d7c, max_idle_ns: 440795319144 ns
Jul 2 00:22:49.168215 kernel: clocksource: Switched to clocksource tsc
Jul 2 00:22:49.168229 kernel: Initialise system trusted keyrings
Jul 2 00:22:49.168244 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 00:22:49.168259 kernel: Key type asymmetric registered
Jul 2 00:22:49.168279 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:22:49.168293 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:22:49.168308 kernel: io scheduler mq-deadline registered
Jul 2 00:22:49.168323 kernel: io scheduler kyber registered
Jul 2 00:22:49.168337 kernel: io scheduler bfq registered
Jul 2 00:22:49.168353 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:22:49.168367 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:22:49.168382 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:22:49.168396 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:22:49.168414 kernel: i8042: Warning: Keylock active
Jul 2 00:22:49.168428 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:22:49.168443 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:22:49.168647 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 2 00:22:49.168772 kernel: rtc_cmos 00:00: registered as rtc0
Jul 2 00:22:49.169115 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T00:22:48 UTC (1719879768)
Jul 2 00:22:49.169247 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 2 00:22:49.169267 kernel: intel_pstate: CPU model not supported
Jul 2 00:22:49.169289 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:22:49.169306 kernel: Segment Routing with IPv6
Jul 2 00:22:49.169322 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:22:49.169338 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:22:49.169354 kernel: Key type dns_resolver registered
Jul 2 00:22:49.169370 kernel: IPI shorthand broadcast: enabled
Jul 2 00:22:49.169387 kernel: sched_clock: Marking stable (732002458, 342652768)->(1188792627, -114137401)
Jul 2 00:22:49.169403 kernel: registered taskstats version 1
Jul 2 00:22:49.169482 kernel: Loading compiled-in X.509 certificates
Jul 2 00:22:49.169504 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:22:49.169520 kernel: Key type .fscrypt registered
Jul 2 00:22:49.169536 kernel: Key type fscrypt-provisioning registered
Jul 2 00:22:49.169553 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:22:49.169569 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:22:49.169584 kernel: ima: No architecture policies found
Jul 2 00:22:49.169600 kernel: clk: Disabling unused clocks
Jul 2 00:22:49.169616 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:22:49.169633 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:22:49.169652 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:22:49.169668 kernel: Run /init as init process
Jul 2 00:22:49.169684 kernel: with arguments:
Jul 2 00:22:49.169700 kernel: /init
Jul 2 00:22:49.169715 kernel: with environment:
Jul 2 00:22:49.169730 kernel: HOME=/
Jul 2 00:22:49.169746 kernel: TERM=linux
Jul 2 00:22:49.169762 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:22:49.169782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:22:49.169805 systemd[1]: Detected virtualization amazon.
Jul 2 00:22:49.169840 systemd[1]: Detected architecture x86-64.
Jul 2 00:22:49.169857 systemd[1]: Running in initrd.
Jul 2 00:22:49.169874 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:22:49.169890 systemd[1]: Hostname set to .
Jul 2 00:22:49.169911 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:22:49.169928 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:22:49.169945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:22:49.169963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:22:49.169981 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:22:49.169999 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:22:49.170016 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:22:49.170037 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:22:49.173492 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:22:49.173514 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:22:49.173533 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:22:49.173551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:22:49.173569 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:22:49.173588 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:22:49.173612 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:22:49.173630 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:22:49.173648 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:22:49.173665 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:22:49.173683 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:22:49.173701 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:22:49.173719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:22:49.173737 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:22:49.173755 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:22:49.173776 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:22:49.173794 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:22:49.173811 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:22:49.173829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:22:49.173847 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:22:49.173865 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:22:49.173883 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:22:49.173904 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:22:49.173921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:22:49.174032 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:22:49.174063 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:22:49.174082 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:22:49.174203 systemd-journald[178]: Collecting audit messages is disabled.
Jul 2 00:22:49.174245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:22:49.174268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:22:49.174286 systemd-journald[178]: Journal started
Jul 2 00:22:49.174321 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2ccc458a5d16cb9f4824d3a7274baf) is 4.8M, max 38.6M, 33.7M free.
Jul 2 00:22:49.122802 systemd-modules-load[179]: Inserted module 'overlay'
Jul 2 00:22:49.309199 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:22:49.309248 kernel: Bridge firewalling registered
Jul 2 00:22:49.309276 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:22:49.189742 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jul 2 00:22:49.305972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:22:49.306911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:22:49.313456 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:22:49.316221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:22:49.333839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:22:49.344481 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:22:49.367497 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:22:49.368427 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:22:49.374247 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:22:49.385287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:22:49.392119 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:22:49.403395 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:22:49.428268 dracut-cmdline[212]: dracut-dracut-053
Jul 2 00:22:49.433346 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:22:49.480506 systemd-resolved[207]: Positive Trust Anchors:
Jul 2 00:22:49.480529 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:22:49.480582 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:22:49.496794 systemd-resolved[207]: Defaulting to hostname 'linux'.
Jul 2 00:22:49.504934 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:22:49.507468 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:22:49.588081 kernel: SCSI subsystem initialized
Jul 2 00:22:49.603176 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:22:49.622071 kernel: iscsi: registered transport (tcp)
Jul 2 00:22:49.659097 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:22:49.659481 kernel: QLogic iSCSI HBA Driver
Jul 2 00:22:49.738193 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:22:49.749354 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:22:49.790292 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:22:49.790379 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:22:49.790400 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:22:49.845075 kernel: raid6: avx512x4 gen() 14132 MB/s
Jul 2 00:22:49.862073 kernel: raid6: avx512x2 gen() 16416 MB/s
Jul 2 00:22:49.880082 kernel: raid6: avx512x1 gen() 12512 MB/s
Jul 2 00:22:49.897081 kernel: raid6: avx2x4 gen() 15552 MB/s
Jul 2 00:22:49.914075 kernel: raid6: avx2x2 gen() 14692 MB/s
Jul 2 00:22:49.931078 kernel: raid6: avx2x1 gen() 10791 MB/s
Jul 2 00:22:49.931239 kernel: raid6: using algorithm avx512x2 gen() 16416 MB/s
Jul 2 00:22:49.948407 kernel: raid6: .... xor() 19729 MB/s, rmw enabled
Jul 2 00:22:49.948640 kernel: raid6: using avx512x2 recovery algorithm
Jul 2 00:22:49.990079 kernel: xor: automatically using best checksumming function avx
Jul 2 00:22:50.262077 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:22:50.273427 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:22:50.280672 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:22:50.309202 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jul 2 00:22:50.316100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:22:50.325306 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:22:50.358345 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Jul 2 00:22:50.402354 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:22:50.410284 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:22:50.540534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:22:50.563733 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:22:50.591031 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:22:50.595800 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:22:50.598604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:22:50.605323 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:22:50.616112 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:22:50.678409 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 2 00:22:50.705354 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 2 00:22:50.705543 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 2 00:22:50.705705 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:07:0b:c0:8d:4d
Jul 2 00:22:50.679201 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:22:50.712336 (udev-worker)[448]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:22:50.717688 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:22:50.734561 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:22:50.734664 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:22:50.736238 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:22:50.736428 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:22:50.739019 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:22:50.740932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:22:50.741161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:22:50.750251 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:22:50.755174 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 2 00:22:50.755402 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 00:22:50.760122 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:22:50.776085 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 2 00:22:50.785909 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:22:50.785979 kernel: GPT:9289727 != 16777215
Jul 2 00:22:50.786007 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:22:50.786025 kernel: GPT:9289727 != 16777215
Jul 2 00:22:50.786056 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:22:50.786075 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:22:50.966389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:22:50.973362 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:22:51.013126 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:22:51.081512 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (447)
Jul 2 00:22:51.088161 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Jul 2 00:22:51.160964 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 2 00:22:51.199676 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 00:22:51.243078 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 2 00:22:51.259450 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 2 00:22:51.259616 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 2 00:22:51.270597 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:22:51.285334 disk-uuid[632]: Primary Header is updated.
Jul 2 00:22:51.285334 disk-uuid[632]: Secondary Entries is updated.
Jul 2 00:22:51.285334 disk-uuid[632]: Secondary Header is updated.
Jul 2 00:22:51.292068 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:22:51.302797 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:22:52.338755 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:22:52.338831 disk-uuid[633]: The operation has completed successfully.
Jul 2 00:22:52.548640 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:22:52.548766 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:22:52.598283 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:22:52.604480 sh[976]: Success
Jul 2 00:22:52.631091 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 00:22:52.748016 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:22:52.758186 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:22:52.763642 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:22:52.802124 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:22:52.802302 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:22:52.802337 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:22:52.804240 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:22:52.804283 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:22:52.894095 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 2 00:22:52.927255 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:22:52.929778 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:22:52.938838 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:22:52.958016 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:22:52.990896 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:22:52.990970 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:22:52.990990 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:22:53.000085 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:22:53.021003 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:22:53.021504 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:22:53.045405 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:22:53.057034 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:22:53.129169 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:22:53.138456 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:22:53.195785 systemd-networkd[1168]: lo: Link UP
Jul 2 00:22:53.195799 systemd-networkd[1168]: lo: Gained carrier
Jul 2 00:22:53.199208 systemd-networkd[1168]: Enumeration completed
Jul 2 00:22:53.199350 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:22:53.201069 systemd[1]: Reached target network.target - Network.
Jul 2 00:22:53.208683 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:22:53.209155 systemd-networkd[1168]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:22:53.217956 systemd-networkd[1168]: eth0: Link UP
Jul 2 00:22:53.218186 systemd-networkd[1168]: eth0: Gained carrier
Jul 2 00:22:53.218928 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:22:53.229161 systemd-networkd[1168]: eth0: DHCPv4 address 172.31.19.56/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:22:53.510530 ignition[1103]: Ignition 2.18.0
Jul 2 00:22:53.510542 ignition[1103]: Stage: fetch-offline
Jul 2 00:22:53.510826 ignition[1103]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:22:53.510835 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:22:53.511567 ignition[1103]: Ignition finished successfully
Jul 2 00:22:53.520088 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:22:53.532450 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:22:53.561616 ignition[1179]: Ignition 2.18.0
Jul 2 00:22:53.561631 ignition[1179]: Stage: fetch
Jul 2 00:22:53.563033 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:22:53.563109 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:22:53.563483 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:22:53.628479 ignition[1179]: PUT result: OK
Jul 2 00:22:53.644059 ignition[1179]: parsed url from cmdline: ""
Jul 2 00:22:53.644072 ignition[1179]: no config URL provided
Jul 2 00:22:53.644084 ignition[1179]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:22:53.644100 ignition[1179]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:22:53.644135 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:22:53.650028 ignition[1179]: PUT result: OK
Jul 2 00:22:53.650222 ignition[1179]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 2 00:22:53.657132 ignition[1179]: GET result: OK
Jul 2 00:22:53.657252 ignition[1179]: parsing config with SHA512: dddf21c278ce2d5ac76a6a7a3ee4817cc98f3cee580d58d883854fbc8e95285492c09d244a64765f9c9723c85c325cced121fac5ec92e7d6c1166064a3f944c4
Jul 2 00:22:53.673705 unknown[1179]: fetched base config from "system"
Jul 2 00:22:53.674138 unknown[1179]: fetched base config from "system"
Jul 2 00:22:53.675949 ignition[1179]: fetch: fetch complete
Jul 2 00:22:53.674151 unknown[1179]: fetched user config from "aws"
Jul 2 00:22:53.675959 ignition[1179]: fetch: fetch passed
Jul 2 00:22:53.680982 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:22:53.676038 ignition[1179]: Ignition finished successfully
Jul 2 00:22:53.692437 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:22:53.722376 ignition[1186]: Ignition 2.18.0
Jul 2 00:22:53.722392 ignition[1186]: Stage: kargs
Jul 2 00:22:53.722857 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:22:53.722871 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:22:53.723083 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:22:53.728027 ignition[1186]: PUT result: OK
Jul 2 00:22:53.732370 ignition[1186]: kargs: kargs passed
Jul 2 00:22:53.733080 ignition[1186]: Ignition finished successfully
Jul 2 00:22:53.736486 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:22:53.760876 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:22:53.816165 ignition[1193]: Ignition 2.18.0
Jul 2 00:22:53.816179 ignition[1193]: Stage: disks
Jul 2 00:22:53.817477 ignition[1193]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:22:53.817497 ignition[1193]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:22:53.818607 ignition[1193]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:22:53.823245 ignition[1193]: PUT result: OK
Jul 2 00:22:53.826620 ignition[1193]: disks: disks passed
Jul 2 00:22:53.826680 ignition[1193]: Ignition finished successfully
Jul 2 00:22:53.830435 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:22:53.833791 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:22:53.846999 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:22:53.847132 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:22:53.852738 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:22:53.853813 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:22:53.872543 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:22:53.966019 systemd-fsck[1202]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:22:53.971349 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:22:53.984365 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:22:54.236337 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:22:54.237156 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:22:54.238098 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:22:54.264336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:22:54.283954 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:22:54.285572 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:22:54.285644 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:22:54.285678 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:22:54.294924 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:22:54.305297 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:22:54.317071 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1221)
Jul 2 00:22:54.319488 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:22:54.319545 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:22:54.319564 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:22:54.333018 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:22:54.334854 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:22:54.760983 initrd-setup-root[1245]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:22:54.789869 initrd-setup-root[1252]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:22:54.797621 initrd-setup-root[1259]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:22:54.805788 initrd-setup-root[1266]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:22:55.092188 systemd-networkd[1168]: eth0: Gained IPv6LL
Jul 2 00:22:55.116610 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:22:55.127889 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:22:55.140474 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:22:55.155611 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:22:55.157724 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:22:55.201634 ignition[1339]: INFO : Ignition 2.18.0
Jul 2 00:22:55.205603 ignition[1339]: INFO : Stage: mount
Jul 2 00:22:55.205603 ignition[1339]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:22:55.205603 ignition[1339]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:22:55.205603 ignition[1339]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:22:55.205676 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:22:55.218367 ignition[1339]: INFO : PUT result: OK
Jul 2 00:22:55.220817 ignition[1339]: INFO : mount: mount passed
Jul 2 00:22:55.221895 ignition[1339]: INFO : Ignition finished successfully
Jul 2 00:22:55.224677 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:22:55.232230 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:22:55.256527 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:22:55.280078 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1351)
Jul 2 00:22:55.285058 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:22:55.285232 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:22:55.285748 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:22:55.292077 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:22:55.295977 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:22:55.338084 ignition[1368]: INFO : Ignition 2.18.0
Jul 2 00:22:55.338084 ignition[1368]: INFO : Stage: files
Jul 2 00:22:55.342391 ignition[1368]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:22:55.342391 ignition[1368]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:22:55.342391 ignition[1368]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:22:55.346867 ignition[1368]: INFO : PUT result: OK
Jul 2 00:22:55.351175 ignition[1368]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:22:55.362922 ignition[1368]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:22:55.362922 ignition[1368]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:22:55.400413 ignition[1368]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:22:55.403971 ignition[1368]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:22:55.403971 ignition[1368]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:22:55.401505 unknown[1368]: wrote ssh authorized keys file for user: core
Jul 2 00:22:55.421525 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:22:55.421525 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:22:55.502285 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:22:55.636038 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:22:55.636038 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:22:55.640885 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jul 2 00:22:55.930648 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 00:22:56.380597 ignition[1368]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:22:56.380597 ignition[1368]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 00:22:56.384514 ignition[1368]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:22:56.384514 ignition[1368]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:22:56.384514 ignition[1368]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 00:22:56.384514 ignition[1368]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:22:56.384514 ignition[1368]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:22:56.384514 ignition[1368]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:22:56.384514 ignition[1368]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:22:56.384514 ignition[1368]: INFO : files: files passed
Jul 2 00:22:56.384514 ignition[1368]: INFO : Ignition finished successfully
Jul 2 00:22:56.401583 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:22:56.410300 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:22:56.428342 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:22:56.434923 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:22:56.435109 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:22:56.449663 initrd-setup-root-after-ignition[1397]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:22:56.449663 initrd-setup-root-after-ignition[1397]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:22:56.465248 initrd-setup-root-after-ignition[1401]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:22:56.468584 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:22:56.471801 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:22:56.485760 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:22:56.566508 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:22:56.566786 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:22:56.570887 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:22:56.572333 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:22:56.577089 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:22:56.584322 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:22:56.613681 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:22:56.624398 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:22:56.659560 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:22:56.662081 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:22:56.664707 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:22:56.666331 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:22:56.666502 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:22:56.668925 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:22:56.672511 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:22:56.675899 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:22:56.677734 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:22:56.678666 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:22:56.679001 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:22:56.679810 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:22:56.680969 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:22:56.684124 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:22:56.684878 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:22:56.686617 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:22:56.686830 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:22:56.688081 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:22:56.688711 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:22:56.689161 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:22:56.696645 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:22:56.700418 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:22:56.701106 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:22:56.708082 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:22:56.708289 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:22:56.713205 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:22:56.713384 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:22:56.737524 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:22:56.769798 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:22:56.778428 ignition[1421]: INFO : Ignition 2.18.0
Jul 2 00:22:56.778428 ignition[1421]: INFO : Stage: umount
Jul 2 00:22:56.798212 ignition[1421]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:22:56.798212 ignition[1421]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:22:56.798212 ignition[1421]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:22:56.798212 ignition[1421]: INFO : PUT result: OK
Jul 2 00:22:56.798212 ignition[1421]: INFO : umount: umount passed
Jul 2 00:22:56.798212 ignition[1421]: INFO : Ignition finished successfully
Jul 2 00:22:56.778795 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:22:56.779025 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:22:56.781654 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:22:56.782237 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:22:56.797872 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:22:56.799340 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:22:56.802351 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:22:56.804171 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:22:56.808795 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:22:56.808880 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:22:56.810080 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:22:56.810285 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:22:56.812390 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:22:56.812451 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:22:56.814812 systemd[1]: Stopped target network.target - Network.
Jul 2 00:22:56.816309 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:22:56.816488 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:22:56.817760 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:22:56.818815 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:22:56.822742 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:22:56.829185 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:22:56.830960 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:22:56.834659 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:22:56.834727 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:22:56.849012 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:22:56.849105 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:22:56.857587 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:22:56.860341 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:22:56.866280 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:22:56.871162 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:22:56.879019 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:22:56.891465 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:22:56.902865 systemd-networkd[1168]: eth0: DHCPv6 lease lost
Jul 2 00:22:56.904613 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:22:56.905496 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:22:56.905626 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:22:56.911085 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:22:56.911212 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:22:56.914733 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:22:56.914788 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:22:56.928259 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:22:56.931946 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:22:56.932019 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:22:56.934573 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:22:56.934632 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:22:56.938309 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:22:56.938366 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:22:56.941327 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:22:56.941378 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:22:56.947642 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:22:56.978778 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:22:56.981024 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:22:56.996248 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:22:56.996322 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:22:56.999501 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:22:56.999570 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:22:57.002723 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:22:57.002811 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:22:57.005284 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:22:57.005358 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:22:57.007740 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:22:57.007842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:22:57.025517 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:22:57.027663 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:22:57.027790 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:22:57.032743 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 2 00:22:57.032826 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:22:57.038330 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:22:57.038411 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:22:57.046488 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:22:57.046689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:22:57.055419 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:22:57.055546 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:22:57.058751 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:22:57.058905 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:22:57.062156 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:22:57.062292 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:22:57.076160 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:22:57.076265 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:22:57.077851 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:22:57.089265 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:22:57.128424 systemd[1]: Switching root.
Jul 2 00:22:57.186135 systemd-journald[178]: Journal stopped
Jul 2 00:23:00.202929 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:23:00.203022 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:23:00.208176 kernel: SELinux: policy capability open_perms=1
Jul 2 00:23:00.208221 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:23:00.208241 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:23:00.208260 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:23:00.208290 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:23:00.208310 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:23:00.208335 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:23:00.208353 kernel: audit: type=1403 audit(1719879778.166:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:23:00.208376 systemd[1]: Successfully loaded SELinux policy in 55.491ms.
Jul 2 00:23:00.208415 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.820ms.
Jul 2 00:23:00.208441 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:23:00.208463 systemd[1]: Detected virtualization amazon.
Jul 2 00:23:00.208484 systemd[1]: Detected architecture x86-64.
Jul 2 00:23:00.208505 systemd[1]: Detected first boot.
Jul 2 00:23:00.208527 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:23:00.208552 zram_generator::config[1463]: No configuration found.
Jul 2 00:23:00.208735 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:23:00.208761 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:23:00.208783 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:23:00.208805 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:23:00.208832 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:23:00.208854 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:23:00.208876 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:23:00.208901 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:23:00.208924 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:23:00.208998 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:23:00.209023 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:23:00.217396 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:23:00.217462 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:23:00.217491 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:23:00.217511 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:23:00.218738 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:23:00.218792 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:23:00.218817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:23:00.218839 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:23:00.218862 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:23:00.218884 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:23:00.218905 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:23:00.218928 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:23:00.218953 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:23:00.218975 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:23:00.218997 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:23:00.219018 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:23:00.219040 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:23:00.228203 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:23:00.228234 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:23:00.228256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:23:00.228279 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:23:00.228301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:23:00.228331 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:23:00.228354 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:23:00.228375 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:23:00.228397 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:23:00.228418 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:00.228440 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:23:00.228462 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:23:00.228483 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:23:00.228509 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:23:00.228531 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:23:00.228552 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:23:00.228624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:23:00.228647 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:23:00.228669 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:23:00.228691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:23:00.228714 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:23:00.228737 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:23:00.228762 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:23:00.228783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:23:00.228806 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:23:00.228828 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:23:00.228850 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:23:00.228871 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:23:00.228893 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:23:00.228915 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:23:00.228940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:23:00.228961 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:23:00.228983 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:23:00.229004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:23:00.229027 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:23:00.234108 systemd[1]: Stopped verity-setup.service.
Jul 2 00:23:00.234158 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:00.234182 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:23:00.234204 kernel: fuse: init (API version 7.39)
Jul 2 00:23:00.234235 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:23:00.234256 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:23:00.234278 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:23:00.234300 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:23:00.234321 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:23:00.234346 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:23:00.234368 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:23:00.234389 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:23:00.234411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:23:00.234434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:23:00.234456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:23:00.234477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:23:00.234499 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:23:00.234524 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:23:00.234551 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:23:00.234612 systemd-journald[1537]: Collecting audit messages is disabled.
Jul 2 00:23:00.234651 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:23:00.234673 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:23:00.234697 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:23:00.234719 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:23:00.234741 systemd-journald[1537]: Journal started
Jul 2 00:23:00.234784 systemd-journald[1537]: Runtime Journal (/run/log/journal/ec2ccc458a5d16cb9f4824d3a7274baf) is 4.8M, max 38.6M, 33.7M free.
Jul 2 00:23:00.260254 kernel: ACPI: bus type drm_connector registered
Jul 2 00:23:00.260328 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:22:59.428540 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:23:00.266588 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:22:59.480266 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 2 00:22:59.487611 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:23:00.279193 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:23:00.279272 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:23:00.279831 kernel: loop: module loaded
Jul 2 00:23:00.293701 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:23:00.306076 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:23:00.310317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:23:00.321069 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:23:00.324163 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:23:00.335936 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:23:00.345484 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:23:00.349514 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:23:00.362286 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:23:00.368081 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:23:00.371343 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:23:00.380677 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:23:00.380999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:23:00.385237 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:23:00.386011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:23:00.389560 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:23:00.391633 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:23:00.397158 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:23:00.445610 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:23:00.447456 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:23:00.465407 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:23:00.473118 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:23:00.493057 kernel: loop0: detected capacity change from 0 to 60984
Jul 2 00:23:00.493264 systemd-journald[1537]: Time spent on flushing to /var/log/journal/ec2ccc458a5d16cb9f4824d3a7274baf is 242.765ms for 962 entries.
Jul 2 00:23:00.493264 systemd-journald[1537]: System Journal (/var/log/journal/ec2ccc458a5d16cb9f4824d3a7274baf) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:23:00.754207 systemd-journald[1537]: Received client request to flush runtime journal.
Jul 2 00:23:00.754313 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:23:00.754596 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:23:00.487333 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:23:00.504549 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:23:00.555253 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:23:00.639127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:23:00.644603 udevadm[1597]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:23:00.757224 systemd-tmpfiles[1571]: ACLs are not supported, ignoring.
Jul 2 00:23:00.757248 systemd-tmpfiles[1571]: ACLs are not supported, ignoring.
Jul 2 00:23:00.758395 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:23:00.771862 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:23:00.781181 kernel: loop1: detected capacity change from 0 to 211296
Jul 2 00:23:00.787304 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:23:00.803536 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:23:00.806218 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:23:00.882211 kernel: loop2: detected capacity change from 0 to 139904
Jul 2 00:23:00.904901 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:23:00.919193 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:23:00.995738 systemd-tmpfiles[1611]: ACLs are not supported, ignoring.
Jul 2 00:23:00.995776 systemd-tmpfiles[1611]: ACLs are not supported, ignoring.
Jul 2 00:23:01.007109 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:23:01.045083 kernel: loop3: detected capacity change from 0 to 80568
Jul 2 00:23:01.214073 kernel: loop4: detected capacity change from 0 to 60984
Jul 2 00:23:01.257095 kernel: loop5: detected capacity change from 0 to 211296
Jul 2 00:23:01.307071 kernel: loop6: detected capacity change from 0 to 139904
Jul 2 00:23:01.397983 kernel: loop7: detected capacity change from 0 to 80568
Jul 2 00:23:01.512624 (sd-merge)[1616]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 2 00:23:01.523509 (sd-merge)[1616]: Merged extensions into '/usr'.
Jul 2 00:23:01.555519 systemd[1]: Reloading requested from client PID 1570 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:23:01.555549 systemd[1]: Reloading...
Jul 2 00:23:01.743758 zram_generator::config[1640]: No configuration found.
Jul 2 00:23:02.337260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:02.570116 systemd[1]: Reloading finished in 1009 ms.
Jul 2 00:23:02.641652 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:23:02.677468 ldconfig[1566]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:23:02.677344 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:23:02.708344 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:23:02.710347 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:23:02.715634 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:23:02.756374 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:23:02.774309 systemd[1]: Reloading requested from client PID 1688 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:23:02.774331 systemd[1]: Reloading...
Jul 2 00:23:02.794819 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:23:02.796527 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:23:02.814601 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:23:02.821627 systemd-tmpfiles[1689]: ACLs are not supported, ignoring.
Jul 2 00:23:02.821913 systemd-tmpfiles[1689]: ACLs are not supported, ignoring.
Jul 2 00:23:02.837999 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:23:02.838015 systemd-tmpfiles[1689]: Skipping /boot
Jul 2 00:23:02.889721 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:23:02.889897 systemd-tmpfiles[1689]: Skipping /boot
Jul 2 00:23:02.991678 systemd-udevd[1693]: Using default interface naming scheme 'v255'.
Jul 2 00:23:03.127125 zram_generator::config[1719]: No configuration found.
Jul 2 00:23:03.325250 (udev-worker)[1750]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:23:03.359731 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1753)
Jul 2 00:23:03.510067 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jul 2 00:23:03.542530 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1741)
Jul 2 00:23:03.542594 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 2 00:23:03.542753 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:23:03.542786 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jul 2 00:23:03.523989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:03.554599 kernel: ACPI: button: Sleep Button [SLPF]
Jul 2 00:23:03.571092 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Jul 2 00:23:03.661878 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:23:03.663458 systemd[1]: Reloading finished in 885 ms.
Jul 2 00:23:03.687093 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:23:03.702759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:23:03.741407 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:03.750293 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:23:03.758293 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:23:03.766305 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:23:03.779273 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:23:03.782282 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:23:03.793847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:03.794408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:23:03.804362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:23:03.814984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:23:03.820601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:23:03.822696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:23:03.822906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:03.838345 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:23:03.840299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:23:03.840812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:23:03.858822 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:03.859151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:23:03.889474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:23:03.890927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:23:03.891137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:03.898661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:23:03.900265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:23:03.904827 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:23:03.911998 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:23:03.925516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:03.926301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:23:03.932717 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:23:03.944706 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:23:03.949424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:23:03.949753 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:23:03.951140 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:03.954329 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:23:03.982587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:23:03.982800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:23:04.005760 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:23:04.023718 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:23:04.027635 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:23:04.027846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:23:04.029866 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:23:04.030066 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:23:04.031734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:23:04.031917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:23:04.080086 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:23:04.077954 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:23:04.078172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:23:04.090494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:04.119817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 00:23:04.123148 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:23:04.126936 augenrules[1914]: No rules
Jul 2 00:23:04.134321 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:23:04.135870 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:23:04.137470 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:23:04.139865 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:04.145960 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:23:04.162547 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:23:04.163953 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:23:04.219361 lvm[1929]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:23:04.235524 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:23:04.281910 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:23:04.284578 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:23:04.294355 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:23:04.326926 systemd-networkd[1872]: lo: Link UP
Jul 2 00:23:04.328665 lvm[1936]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:23:04.326942 systemd-networkd[1872]: lo: Gained carrier
Jul 2 00:23:04.329309 systemd-networkd[1872]: Enumeration completed
Jul 2 00:23:04.329919 systemd-networkd[1872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:04.329924 systemd-networkd[1872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:23:04.330299 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:23:04.340473 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:23:04.345781 systemd-resolved[1873]: Positive Trust Anchors:
Jul 2 00:23:04.345809 systemd-resolved[1873]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:23:04.345858 systemd-resolved[1873]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:23:04.349451 systemd-networkd[1872]: eth0: Link UP
Jul 2 00:23:04.349716 systemd-networkd[1872]: eth0: Gained carrier
Jul 2 00:23:04.349754 systemd-networkd[1872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:04.358428 systemd-networkd[1872]: eth0: DHCPv4 address 172.31.19.56/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:23:04.359834 systemd-resolved[1873]: Defaulting to hostname 'linux'.
Jul 2 00:23:04.374252 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:23:04.487858 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:23:04.489769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:04.492755 systemd[1]: Reached target network.target - Network.
Jul 2 00:23:04.493845 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:23:04.496724 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:23:04.498823 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:23:04.500305 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:23:04.502317 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:23:04.503561 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:23:04.505106 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:23:04.507500 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:23:04.507530 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:23:04.508704 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:23:04.512161 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:23:04.516502 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:23:04.529700 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:23:04.536537 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:23:04.538315 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:23:04.541027 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:23:04.547530 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:23:04.547698 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:23:04.557436 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:23:04.568417 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 00:23:04.583299 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:23:04.587364 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:23:04.594458 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:23:04.596679 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:23:04.607447 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:23:04.612700 systemd[1]: Started ntpd.service - Network Time Service.
Jul 2 00:23:04.623111 jq[1948]: false
Jul 2 00:23:04.626376 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:23:04.637241 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 2 00:23:04.644353 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:23:04.651312 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:23:04.671259 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:23:04.677298 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:23:04.678173 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:23:04.693667 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:23:04.708314 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:23:04.721576 dbus-daemon[1947]: [system] SELinux support is enabled
Jul 2 00:23:04.722226 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:23:04.730002 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:23:04.730259 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:23:04.740617 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:23:04.742807 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:23:04.770407 dbus-daemon[1947]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1872 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found loop4
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found loop5
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found loop6
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found loop7
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found nvme0n1
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found nvme0n1p1
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found nvme0n1p2
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found nvme0n1p3
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found usr
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found nvme0n1p4
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found nvme0n1p6
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found nvme0n1p7
Jul 2 00:23:04.782072 extend-filesystems[1949]: Found nvme0n1p9
Jul 2 00:23:04.782072 extend-filesystems[1949]: Checking size of /dev/nvme0n1p9
Jul 2 00:23:04.824394 jq[1959]: true
Jul 2 00:23:04.824531 update_engine[1958]: I0702 00:23:04.794176 1958 main.cc:92] Flatcar Update Engine starting
Jul 2 00:23:04.824531 update_engine[1958]: I0702 00:23:04.823261 1958 update_check_scheduler.cc:74] Next update check in 8m13s
Jul 2 00:23:04.842539 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 2 00:23:04.848989 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:23:04.869746 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:23:04.869833 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:23:04.891029 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 2 00:23:04.892428 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:23:04.892468 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:23:04.905597 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:23:04.909246 (ntainerd)[1977]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:23:04.945341 tar[1964]: linux-amd64/helm
Jul 2 00:23:04.951924 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting
Jul 2 00:23:04.956184 jq[1975]: true
Jul 2 00:23:04.956401 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting
Jul 2 00:23:04.956401 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 2 00:23:04.956401 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: ----------------------------------------------------
Jul 2 00:23:04.956401 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: ntp-4 is maintained by Network Time Foundation,
Jul 2 00:23:04.956401 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 2 00:23:04.956401 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: corporation. Support and training for ntp-4 are
Jul 2 00:23:04.956401 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: available at https://www.nwtime.org/support
Jul 2 00:23:04.956401 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: ----------------------------------------------------
Jul 2 00:23:04.951955 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 2 00:23:04.951965 ntpd[1951]: ----------------------------------------------------
Jul 2 00:23:04.951975 ntpd[1951]: ntp-4 is maintained by Network Time Foundation,
Jul 2 00:23:04.951984 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 2 00:23:04.951994 ntpd[1951]: corporation. Support and training for ntp-4 are
Jul 2 00:23:04.952003 ntpd[1951]: available at https://www.nwtime.org/support
Jul 2 00:23:04.952013 ntpd[1951]: ----------------------------------------------------
Jul 2 00:23:04.979704 ntpd[1951]: proto: precision = 0.065 usec (-24)
Jul 2 00:23:04.985762 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:23:04.991348 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: proto: precision = 0.065 usec (-24)
Jul 2 00:23:04.991348 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: basedate set to 2024-06-19
Jul 2 00:23:04.991348 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: gps base set to 2024-06-23 (week 2320)
Jul 2 00:23:04.991456 coreos-metadata[1946]: Jul 02 00:23:04.982 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 2 00:23:04.991456 coreos-metadata[1946]: Jul 02 00:23:04.984 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 2 00:23:04.991837 extend-filesystems[1949]: Resized partition /dev/nvme0n1p9
Jul 2 00:23:04.986321 ntpd[1951]: basedate set to 2024-06-19
Jul 2 00:23:04.986795 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:23:04.999965 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123
Jul 2 00:23:04.999965 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 2 00:23:04.999965 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123
Jul 2 00:23:04.999965 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: Listen normally on 3 eth0 172.31.19.56:123
Jul 2 00:23:04.999965 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: Listen normally on 4 lo [::1]:123
Jul 2 00:23:04.999965 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: bind(21) AF_INET6 fe80::407:bff:fec0:8d4d%2#123 flags 0x11 failed: Cannot assign requested address
Jul 2 00:23:04.999965 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: unable to create socket on eth0 (5) for fe80::407:bff:fec0:8d4d%2#123
Jul 2 00:23:04.999965 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: failed to init interface for address fe80::407:bff:fec0:8d4d%2
Jul 2 00:23:04.999965 ntpd[1951]: 2 Jul 00:23:04 ntpd[1951]: Listening on routing socket on fd #21 for interface updates
Jul 2 00:23:05.000346 extend-filesystems[2000]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:23:04.986344 ntpd[1951]: gps base set to 2024-06-23 (week 2320)
Jul 2 00:23:05.004328 coreos-metadata[1946]: Jul 02 00:23:05.002 INFO Fetch successful
Jul 2 00:23:05.004328 coreos-metadata[1946]: Jul 02 00:23:05.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 2 00:23:04.998372 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123
Jul 2 00:23:04.998442 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 2 00:23:04.998648 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123
Jul 2 00:23:04.998682 ntpd[1951]: Listen normally on 3 eth0 172.31.19.56:123
Jul 2 00:23:04.998722 ntpd[1951]: Listen normally on 4 lo [::1]:123
Jul 2 00:23:04.998761 ntpd[1951]: bind(21) AF_INET6 fe80::407:bff:fec0:8d4d%2#123 flags 0x11 failed: Cannot assign requested address
Jul 2 00:23:04.998783 ntpd[1951]: unable to create socket on eth0 (5) for fe80::407:bff:fec0:8d4d%2#123
Jul 2 00:23:04.998798 ntpd[1951]: failed to init interface for address fe80::407:bff:fec0:8d4d%2
Jul 2 00:23:04.998827 ntpd[1951]: Listening on routing socket on fd #21 for interface updates
Jul 2 00:23:05.015420 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 2 00:23:05.015712 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:23:05.016618 coreos-metadata[1946]: Jul 02 00:23:05.016 INFO Fetch successful
Jul 2 00:23:05.016618 coreos-metadata[1946]: Jul 02 00:23:05.016 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 2 00:23:05.016703 ntpd[1951]: 2 Jul 00:23:05 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:23:05.016703 ntpd[1951]: 2 Jul 00:23:05 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:23:05.015751 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:23:05.020351 coreos-metadata[1946]: Jul 02 00:23:05.019 INFO Fetch successful
Jul 2 00:23:05.020351 coreos-metadata[1946]: Jul 02 00:23:05.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 2 00:23:05.028208 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 2 00:23:05.029290 coreos-metadata[1946]: Jul 02 00:23:05.025 INFO Fetch successful
Jul 2 00:23:05.029290 coreos-metadata[1946]: Jul 02 00:23:05.025 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 2 00:23:05.035067 coreos-metadata[1946]: Jul 02 00:23:05.033 INFO Fetch failed with 404: resource not found
Jul 2 00:23:05.035067 coreos-metadata[1946]: Jul 02 00:23:05.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 2 00:23:05.035067 coreos-metadata[1946]: Jul 02 00:23:05.034 INFO Fetch successful
Jul 2 00:23:05.035067 coreos-metadata[1946]: Jul 02 00:23:05.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 2 00:23:05.035433 coreos-metadata[1946]: Jul 02 00:23:05.035 INFO Fetch successful
Jul 2 00:23:05.035433 coreos-metadata[1946]: Jul 02 00:23:05.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 2 00:23:05.040124 coreos-metadata[1946]: Jul 02 00:23:05.040 INFO Fetch successful
Jul 2 00:23:05.040124 coreos-metadata[1946]: Jul 02 00:23:05.040 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 2 00:23:05.045982 coreos-metadata[1946]: Jul 02 00:23:05.045 INFO Fetch successful
Jul 2 00:23:05.045982 coreos-metadata[1946]: Jul 02 00:23:05.045 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 2 00:23:05.046163 coreos-metadata[1946]: Jul 02 00:23:05.046 INFO Fetch successful
Jul 2 00:23:05.103868 systemd-logind[1957]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:23:05.132594 systemd-logind[1957]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jul 2 00:23:05.132808 systemd-logind[1957]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:23:05.136219 systemd-logind[1957]: New seat seat0.
Jul 2 00:23:05.146233 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:23:05.201680 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 2 00:23:05.246132 extend-filesystems[2000]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 2 00:23:05.246132 extend-filesystems[2000]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:23:05.246132 extend-filesystems[2000]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 2 00:23:05.277713 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1736)
Jul 2 00:23:05.277819 extend-filesystems[1949]: Resized filesystem in /dev/nvme0n1p9
Jul 2 00:23:05.347293 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 00:23:05.350000 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:23:05.367661 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:23:05.369880 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:23:05.398374 bash[2054]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:23:05.419293 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:23:05.454237 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:23:05.474510 systemd[1]: Starting sshkeys.service...
Jul 2 00:23:05.604195 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 2 00:23:05.616634 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 2 00:23:05.699606 locksmithd[1990]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:23:05.748077 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 2 00:23:05.748712 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 2 00:23:05.750858 dbus-daemon[1947]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1989 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 2 00:23:05.764031 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 2 00:23:05.828867 polkitd[2113]: Started polkitd version 121
Jul 2 00:23:05.849856 sshd_keygen[1988]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:23:05.884381 polkitd[2113]: Loading rules from directory /etc/polkit-1/rules.d
Jul 2 00:23:05.884678 polkitd[2113]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 2 00:23:05.885619 coreos-metadata[2098]: Jul 02 00:23:05.885 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 2 00:23:05.904839 coreos-metadata[2098]: Jul 02 00:23:05.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jul 2 00:23:05.920249 coreos-metadata[2098]: Jul 02 00:23:05.920 INFO Fetch successful
Jul 2 00:23:05.920249 coreos-metadata[2098]: Jul 02 00:23:05.920 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 00:23:05.923243 polkitd[2113]: Finished loading, compiling and executing 2 rules
Jul 2 00:23:05.931641 coreos-metadata[2098]: Jul 02 00:23:05.927 INFO Fetch successful
Jul 2 00:23:05.940087 unknown[2098]: wrote ssh authorized keys file for user: core
Jul 2 00:23:05.940347 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 2 00:23:05.941022 systemd[1]: Started polkit.service - Authorization Manager.
Jul 2 00:23:05.950386 polkitd[2113]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 2 00:23:05.953340 ntpd[1951]: bind(24) AF_INET6 fe80::407:bff:fec0:8d4d%2#123 flags 0x11 failed: Cannot assign requested address
Jul 2 00:23:05.953381 ntpd[1951]: unable to create socket on eth0 (6) for fe80::407:bff:fec0:8d4d%2#123
Jul 2 00:23:05.953735 ntpd[1951]: 2 Jul 00:23:05 ntpd[1951]: bind(24) AF_INET6 fe80::407:bff:fec0:8d4d%2#123 flags 0x11 failed: Cannot assign requested address
Jul 2 00:23:05.953735 ntpd[1951]: 2 Jul 00:23:05 ntpd[1951]: unable to create socket on eth0 (6) for fe80::407:bff:fec0:8d4d%2#123
Jul 2 00:23:05.953735 ntpd[1951]: 2 Jul 00:23:05 ntpd[1951]: failed to init interface for address fe80::407:bff:fec0:8d4d%2
Jul 2 00:23:05.953396 ntpd[1951]: failed to init interface for address fe80::407:bff:fec0:8d4d%2
Jul 2 00:23:05.971513 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:23:05.973470 systemd-networkd[1872]: eth0: Gained IPv6LL
Jul 2 00:23:05.982939 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:23:05.996188 systemd[1]: Started sshd@0-172.31.19.56:22-147.75.109.163:34458.service - OpenSSH per-connection server daemon (147.75.109.163:34458).
Jul 2 00:23:05.998823 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:23:06.005706 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:23:06.016530 update-ssh-keys[2149]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:23:06.014062 systemd-hostnamed[1989]: Hostname set to (transient)
Jul 2 00:23:06.016322 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jul 2 00:23:06.021127 systemd-resolved[1873]: System hostname changed to 'ip-172-31-19-56'.
Jul 2 00:23:06.033447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:06.037757 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:23:06.040970 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 00:23:06.058674 systemd[1]: Finished sshkeys.service.
Jul 2 00:23:06.081085 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:23:06.081338 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:23:06.086160 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:23:06.152364 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:23:06.164562 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:23:06.170013 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:23:06.171756 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:23:06.244812 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:23:06.250168 amazon-ssm-agent[2156]: Initializing new seelog logger
Jul 2 00:23:06.254571 amazon-ssm-agent[2156]: New Seelog Logger Creation Complete
Jul 2 00:23:06.254571 amazon-ssm-agent[2156]: 2024/07/02 00:23:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:06.254571 amazon-ssm-agent[2156]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:06.254571 amazon-ssm-agent[2156]: 2024/07/02 00:23:06 processing appconfig overrides
Jul 2 00:23:06.266824 amazon-ssm-agent[2156]: 2024/07/02 00:23:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:06.266824 amazon-ssm-agent[2156]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:06.266824 amazon-ssm-agent[2156]: 2024/07/02 00:23:06 processing appconfig overrides
Jul 2 00:23:06.267529 amazon-ssm-agent[2156]: 2024/07/02 00:23:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:06.267529 amazon-ssm-agent[2156]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:06.267529 amazon-ssm-agent[2156]: 2024/07/02 00:23:06 processing appconfig overrides
Jul 2 00:23:06.268260 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO Proxy environment variables:
Jul 2 00:23:06.286088 amazon-ssm-agent[2156]: 2024/07/02 00:23:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:06.286252 amazon-ssm-agent[2156]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:06.286453 amazon-ssm-agent[2156]: 2024/07/02 00:23:06 processing appconfig overrides
Jul 2 00:23:06.313206 sshd[2153]: Accepted publickey for core from 147.75.109.163 port 34458 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:06.321118 sshd[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:06.370005 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO https_proxy:
Jul 2 00:23:06.372569 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:23:06.387842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:23:06.413195 systemd-logind[1957]: New session 1 of user core.
Jul 2 00:23:06.440400 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:23:06.454692 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:23:06.470636 (systemd)[2186]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:06.475686 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO http_proxy:
Jul 2 00:23:06.517391 containerd[1977]: time="2024-07-02T00:23:06.517271149Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:23:06.582770 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO no_proxy:
Jul 2 00:23:06.680744 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO Checking if agent identity type OnPrem can be assumed
Jul 2 00:23:06.765031 systemd[2186]: Queued start job for default target default.target.
Jul 2 00:23:06.771910 systemd[2186]: Created slice app.slice - User Application Slice.
Jul 2 00:23:06.771962 systemd[2186]: Reached target paths.target - Paths.
Jul 2 00:23:06.771983 systemd[2186]: Reached target timers.target - Timers.
Jul 2 00:23:06.778244 systemd[2186]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:23:06.781068 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO Checking if agent identity type EC2 can be assumed
Jul 2 00:23:06.815554 containerd[1977]: time="2024-07-02T00:23:06.811751133Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:23:06.815554 containerd[1977]: time="2024-07-02T00:23:06.811810377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:06.809462 systemd[2186]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:23:06.809549 systemd[2186]: Reached target sockets.target - Sockets.
Jul 2 00:23:06.809570 systemd[2186]: Reached target basic.target - Basic System.
Jul 2 00:23:06.809630 systemd[2186]: Reached target default.target - Main User Target.
Jul 2 00:23:06.809669 systemd[2186]: Startup finished in 320ms.
Jul 2 00:23:06.809933 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:23:06.825160 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.817073805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.817121797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.817648507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.817708776Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.818545980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.818890477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.818914912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.819016311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.819287218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.819311023Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:23:06.826673 containerd[1977]: time="2024-07-02T00:23:06.819326449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:06.829839 containerd[1977]: time="2024-07-02T00:23:06.819827016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:23:06.829839 containerd[1977]: time="2024-07-02T00:23:06.819860212Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:23:06.829839 containerd[1977]: time="2024-07-02T00:23:06.819965924Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:23:06.829839 containerd[1977]: time="2024-07-02T00:23:06.819989607Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:23:06.836331 containerd[1977]: time="2024-07-02T00:23:06.836285737Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:23:06.836563 containerd[1977]: time="2024-07-02T00:23:06.836540079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:23:06.836684 containerd[1977]: time="2024-07-02T00:23:06.836667114Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:23:06.836949 containerd[1977]: time="2024-07-02T00:23:06.836912517Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837105350Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837126427Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837145739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837375991Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837416027Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837437184Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837457867Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837494516Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837654285Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837680671Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837707777Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837728539Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837751361Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:23:06.839201 containerd[1977]: time="2024-07-02T00:23:06.837771441Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.837789108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838036119Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838374470Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838462209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838486412Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838518997Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838588362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838607099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838625527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838643195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838662856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838682992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838703136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840161 containerd[1977]: time="2024-07-02T00:23:06.838721467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840810 containerd[1977]: time="2024-07-02T00:23:06.838740477Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:23:06.840810 containerd[1977]: time="2024-07-02T00:23:06.838964288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.840810 containerd[1977]: time="2024-07-02T00:23:06.838990241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.867474 containerd[1977]: time="2024-07-02T00:23:06.842959116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.867474 containerd[1977]: time="2024-07-02T00:23:06.843091959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.867474 containerd[1977]: time="2024-07-02T00:23:06.843119171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.867474 containerd[1977]: time="2024-07-02T00:23:06.843161692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.867474 containerd[1977]: time="2024-07-02T00:23:06.843184669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.867474 containerd[1977]: time="2024-07-02T00:23:06.843656060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:23:06.867785 containerd[1977]: time="2024-07-02T00:23:06.862786095Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 00:23:06.867785 containerd[1977]: time="2024-07-02T00:23:06.862925486Z" level=info msg="Connect containerd service"
Jul 2 00:23:06.867785 containerd[1977]: time="2024-07-02T00:23:06.862990028Z" level=info msg="using legacy CRI server"
Jul 2 00:23:06.867785 containerd[1977]: time="2024-07-02T00:23:06.863000936Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 00:23:06.867785 containerd[1977]: time="2024-07-02T00:23:06.863170636Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 00:23:06.867785 containerd[1977]: time="2024-07-02T00:23:06.867633002Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:23:06.867785 containerd[1977]: time="2024-07-02T00:23:06.867735103Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 00:23:06.869752 containerd[1977]: time="2024-07-02T00:23:06.867813432Z" level=info msg="Start subscribing containerd event"
Jul 2 00:23:06.869752 containerd[1977]: time="2024-07-02T00:23:06.869419729Z" level=info msg="Start recovering state"
Jul 2 00:23:06.869752 containerd[1977]: time="2024-07-02T00:23:06.869532594Z" level=info msg="Start event monitor"
Jul 2 00:23:06.869752 containerd[1977]: time="2024-07-02T00:23:06.869556381Z" level=info msg="Start snapshots syncer"
Jul 2 00:23:06.869752 containerd[1977]: time="2024-07-02T00:23:06.869571691Z" level=info msg="Start cni network conf syncer for default"
Jul 2 00:23:06.869752 containerd[1977]: time="2024-07-02T00:23:06.869582957Z" level=info msg="Start streaming server"
Jul 2 00:23:06.873570 containerd[1977]: time="2024-07-02T00:23:06.869996351Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 00:23:06.876167 containerd[1977]: time="2024-07-02T00:23:06.876022541Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 00:23:06.880718 containerd[1977]: time="2024-07-02T00:23:06.876190766Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 00:23:06.880718 containerd[1977]: time="2024-07-02T00:23:06.877239425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 00:23:06.880718 containerd[1977]: time="2024-07-02T00:23:06.877343240Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 00:23:06.880718 containerd[1977]: time="2024-07-02T00:23:06.878384598Z" level=info msg="containerd successfully booted in 0.378940s"
Jul 2 00:23:06.878238 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 00:23:06.884526 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO Agent will take identity from EC2
Jul 2 00:23:06.987918 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:23:07.027671 systemd[1]: Started sshd@1-172.31.19.56:22-147.75.109.163:34464.service - OpenSSH per-connection server daemon (147.75.109.163:34464).
Jul 2 00:23:07.089158 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:23:07.188314 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO [amazon-ssm-agent] Starting Core Agent
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO [Registrar] Starting registrar module
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:06 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:07 INFO [EC2Identity] EC2 registration was successful.
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:07 INFO [CredentialRefresher] credentialRefresher has started
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:07 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 2 00:23:07.288367 amazon-ssm-agent[2156]: 2024-07-02 00:23:07 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 2 00:23:07.289872 amazon-ssm-agent[2156]: 2024-07-02 00:23:07 INFO [CredentialRefresher] Next credential rotation will be in 30.724979994599998 minutes
Jul 2 00:23:07.331106 sshd[2200]: Accepted publickey for core from 147.75.109.163 port 34464 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:07.335550 sshd[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:07.350714 systemd-logind[1957]: New session 2 of user core.
Jul 2 00:23:07.356158 tar[1964]: linux-amd64/LICENSE
Jul 2 00:23:07.359549 tar[1964]: linux-amd64/README.md
Jul 2 00:23:07.358296 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:23:07.388780 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 00:23:07.492876 sshd[2200]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:07.499644 systemd[1]: sshd@1-172.31.19.56:22-147.75.109.163:34464.service: Deactivated successfully.
Jul 2 00:23:07.501866 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:23:07.507075 systemd-logind[1957]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:23:07.509795 systemd-logind[1957]: Removed session 2.
Jul 2 00:23:07.535100 systemd[1]: Started sshd@2-172.31.19.56:22-147.75.109.163:34474.service - OpenSSH per-connection server daemon (147.75.109.163:34474).
Jul 2 00:23:07.707904 sshd[2212]: Accepted publickey for core from 147.75.109.163 port 34474 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:07.711661 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:07.720274 systemd-logind[1957]: New session 3 of user core.
Jul 2 00:23:07.728329 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:23:07.860078 sshd[2212]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:07.867521 systemd[1]: sshd@2-172.31.19.56:22-147.75.109.163:34474.service: Deactivated successfully.
Jul 2 00:23:07.871802 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 00:23:07.872637 systemd-logind[1957]: Session 3 logged out. Waiting for processes to exit.
Jul 2 00:23:07.874285 systemd-logind[1957]: Removed session 3.
Jul 2 00:23:08.306292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:08.314718 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:23:08.317195 systemd[1]: Startup finished in 917ms (kernel) + 9.403s (initrd) + 10.204s (userspace) = 20.526s.
Jul 2 00:23:08.326431 amazon-ssm-agent[2156]: 2024-07-02 00:23:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 2 00:23:08.431789 amazon-ssm-agent[2156]: 2024-07-02 00:23:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2225) started
Jul 2 00:23:08.492670 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:08.534226 amazon-ssm-agent[2156]: 2024-07-02 00:23:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 2 00:23:08.952612 ntpd[1951]: Listen normally on 7 eth0 [fe80::407:bff:fec0:8d4d%2]:123
Jul 2 00:23:08.952974 ntpd[1951]: 2 Jul 00:23:08 ntpd[1951]: Listen normally on 7 eth0 [fe80::407:bff:fec0:8d4d%2]:123
Jul 2 00:23:09.760688 kubelet[2223]: E0702 00:23:09.760020 2223 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:09.764137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:09.764423 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:09.764785 systemd[1]: kubelet.service: Consumed 1.116s CPU time.
Jul 2 00:23:17.887319 systemd[1]: Started sshd@3-172.31.19.56:22-147.75.109.163:33560.service - OpenSSH per-connection server daemon (147.75.109.163:33560).
Jul 2 00:23:18.080581 sshd[2248]: Accepted publickey for core from 147.75.109.163 port 33560 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:18.082540 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:18.092091 systemd-logind[1957]: New session 4 of user core.
Jul 2 00:23:18.108317 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:23:18.240088 sshd[2248]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:18.246297 systemd[1]: sshd@3-172.31.19.56:22-147.75.109.163:33560.service: Deactivated successfully.
Jul 2 00:23:18.249676 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:23:18.253604 systemd-logind[1957]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:23:18.255194 systemd-logind[1957]: Removed session 4.
Jul 2 00:23:18.278764 systemd[1]: Started sshd@4-172.31.19.56:22-147.75.109.163:33564.service - OpenSSH per-connection server daemon (147.75.109.163:33564).
Jul 2 00:23:18.461188 sshd[2255]: Accepted publickey for core from 147.75.109.163 port 33564 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:18.462819 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:18.474281 systemd-logind[1957]: New session 5 of user core.
Jul 2 00:23:18.479338 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:23:18.598286 sshd[2255]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:18.604758 systemd[1]: sshd@4-172.31.19.56:22-147.75.109.163:33564.service: Deactivated successfully.
Jul 2 00:23:18.608476 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:23:18.609304 systemd-logind[1957]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:23:18.610492 systemd-logind[1957]: Removed session 5.
Jul 2 00:23:18.629533 systemd[1]: Started sshd@5-172.31.19.56:22-147.75.109.163:33574.service - OpenSSH per-connection server daemon (147.75.109.163:33574).
Jul 2 00:23:18.788249 sshd[2262]: Accepted publickey for core from 147.75.109.163 port 33574 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:18.790035 sshd[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:18.795566 systemd-logind[1957]: New session 6 of user core.
Jul 2 00:23:18.803389 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:23:18.921554 sshd[2262]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:18.926441 systemd[1]: sshd@5-172.31.19.56:22-147.75.109.163:33574.service: Deactivated successfully.
Jul 2 00:23:18.929447 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:23:18.930580 systemd-logind[1957]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:23:18.931961 systemd-logind[1957]: Removed session 6.
Jul 2 00:23:18.955332 systemd[1]: Started sshd@6-172.31.19.56:22-147.75.109.163:33576.service - OpenSSH per-connection server daemon (147.75.109.163:33576).
Jul 2 00:23:19.126461 sshd[2269]: Accepted publickey for core from 147.75.109.163 port 33576 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:19.128561 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:19.135767 systemd-logind[1957]: New session 7 of user core.
Jul 2 00:23:19.147566 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:23:19.280503 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:23:19.282384 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:19.309562 sudo[2272]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:19.333313 sshd[2269]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:19.339228 systemd[1]: sshd@6-172.31.19.56:22-147.75.109.163:33576.service: Deactivated successfully.
Jul 2 00:23:19.341650 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:23:19.343960 systemd-logind[1957]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:23:19.346146 systemd-logind[1957]: Removed session 7.
Jul 2 00:23:19.375474 systemd[1]: Started sshd@7-172.31.19.56:22-147.75.109.163:33578.service - OpenSSH per-connection server daemon (147.75.109.163:33578).
Jul 2 00:23:19.536900 sshd[2277]: Accepted publickey for core from 147.75.109.163 port 33578 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:19.538678 sshd[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:19.549839 systemd-logind[1957]: New session 8 of user core.
Jul 2 00:23:19.559694 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 00:23:19.659653 sudo[2281]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:23:19.660243 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:19.665178 sudo[2281]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:19.674980 sudo[2280]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:23:19.675426 sudo[2280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:19.702905 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:19.706121 auditctl[2284]: No rules
Jul 2 00:23:19.707563 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:23:19.707894 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:19.718445 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:19.753965 augenrules[2302]: No rules
Jul 2 00:23:19.756605 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:19.758375 sudo[2280]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:19.781812 sshd[2277]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:19.790278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:23:19.791238 systemd[1]: sshd@7-172.31.19.56:22-147.75.109.163:33578.service: Deactivated successfully.
Jul 2 00:23:19.793915 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:23:19.797601 systemd-logind[1957]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:23:19.804983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:19.848492 systemd[1]: Started sshd@8-172.31.19.56:22-147.75.109.163:33588.service - OpenSSH per-connection server daemon (147.75.109.163:33588).
Jul 2 00:23:19.853215 systemd-logind[1957]: Removed session 8.
Jul 2 00:23:20.018240 sshd[2313]: Accepted publickey for core from 147.75.109.163 port 33588 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:20.019841 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:20.026420 systemd-logind[1957]: New session 9 of user core.
Jul 2 00:23:20.030237 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:23:20.133910 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:23:20.134493 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:20.476257 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:23:20.501085 (dockerd)[2325]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:23:20.649464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:20.670621 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:20.815511 kubelet[2331]: E0702 00:23:20.815357 2331 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:20.822432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:20.822614 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:21.288472 dockerd[2325]: time="2024-07-02T00:23:21.288267614Z" level=info msg="Starting up"
Jul 2 00:23:21.380608 dockerd[2325]: time="2024-07-02T00:23:21.380300258Z" level=info msg="Loading containers: start."
Jul 2 00:23:21.579291 kernel: Initializing XFRM netlink socket
Jul 2 00:23:21.672729 (udev-worker)[2351]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:23:21.760460 systemd-networkd[1872]: docker0: Link UP
Jul 2 00:23:21.785342 dockerd[2325]: time="2024-07-02T00:23:21.785132545Z" level=info msg="Loading containers: done."
Jul 2 00:23:21.985358 dockerd[2325]: time="2024-07-02T00:23:21.985317114Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:23:21.985718 dockerd[2325]: time="2024-07-02T00:23:21.985576925Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:23:21.986016 dockerd[2325]: time="2024-07-02T00:23:21.985935724Z" level=info msg="Daemon has completed initialization"
Jul 2 00:23:22.091474 dockerd[2325]: time="2024-07-02T00:23:22.091267984Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:23:22.092358 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:23:23.387401 containerd[1977]: time="2024-07-02T00:23:23.387348963Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\""
Jul 2 00:23:24.098468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3892275329.mount: Deactivated successfully.
Jul 2 00:23:27.955722 containerd[1977]: time="2024-07-02T00:23:27.955663295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:27.957488 containerd[1977]: time="2024-07-02T00:23:27.957250486Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837"
Jul 2 00:23:27.960929 containerd[1977]: time="2024-07-02T00:23:27.960856278Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:27.977177 containerd[1977]: time="2024-07-02T00:23:27.977090334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:27.979030 containerd[1977]: time="2024-07-02T00:23:27.978626081Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 4.591232564s"
Jul 2 00:23:27.979030 containerd[1977]: time="2024-07-02T00:23:27.978777512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\""
Jul 2 00:23:28.022701 containerd[1977]: time="2024-07-02T00:23:28.022651580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\""
Jul 2 00:23:30.985674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:23:31.000553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:31.528089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:31.543515 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:31.731394 kubelet[2541]: E0702 00:23:31.731301 2541 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:31.736474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:31.736670 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:31.987909 containerd[1977]: time="2024-07-02T00:23:31.987821973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:31.989772 containerd[1977]: time="2024-07-02T00:23:31.989570499Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747"
Jul 2 00:23:31.995072 containerd[1977]: time="2024-07-02T00:23:31.991931077Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:31.997262 containerd[1977]: time="2024-07-02T00:23:31.997159075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:31.998475 containerd[1977]: time="2024-07-02T00:23:31.998430197Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 3.975737099s"
Jul 2 00:23:31.998595 containerd[1977]: time="2024-07-02T00:23:31.998480792Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\""
Jul 2 00:23:32.030635 containerd[1977]: time="2024-07-02T00:23:32.030600250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\""
Jul 2 00:23:34.218341 containerd[1977]: time="2024-07-02T00:23:34.218280830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:34.220245 containerd[1977]: time="2024-07-02T00:23:34.220059181Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803"
Jul 2 00:23:34.222181 containerd[1977]: time="2024-07-02T00:23:34.222141329Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:34.233073 containerd[1977]: time="2024-07-02T00:23:34.232153194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:34.233956 containerd[1977]: time="2024-07-02T00:23:34.233647780Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 2.203002582s"
Jul 2 00:23:34.233956 containerd[1977]: time="2024-07-02T00:23:34.233694780Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\""
Jul 2 00:23:34.263912 containerd[1977]: time="2024-07-02T00:23:34.263873056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\""
Jul 2 00:23:35.772837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363952616.mount: Deactivated successfully.
Jul 2 00:23:36.051170 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 2 00:23:36.477000 containerd[1977]: time="2024-07-02T00:23:36.476826244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:36.479096 containerd[1977]: time="2024-07-02T00:23:36.478809685Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334"
Jul 2 00:23:36.481285 containerd[1977]: time="2024-07-02T00:23:36.480984892Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:36.484409 containerd[1977]: time="2024-07-02T00:23:36.484343468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:36.485646 containerd[1977]: time="2024-07-02T00:23:36.485098551Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 2.221183999s"
Jul 2 00:23:36.485646 containerd[1977]: time="2024-07-02T00:23:36.485150761Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\""
Jul 2 00:23:36.516222 containerd[1977]: time="2024-07-02T00:23:36.516183876Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 00:23:37.189156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182447179.mount: Deactivated successfully.
Jul 2 00:23:38.790270 containerd[1977]: time="2024-07-02T00:23:38.790132296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:38.793156 containerd[1977]: time="2024-07-02T00:23:38.792885185Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jul 2 00:23:38.800785 containerd[1977]: time="2024-07-02T00:23:38.800699316Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:38.810971 containerd[1977]: time="2024-07-02T00:23:38.810897807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:38.813101 containerd[1977]: time="2024-07-02T00:23:38.813033008Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.296802338s"
Jul 2 00:23:38.813226 containerd[1977]: time="2024-07-02T00:23:38.813108421Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 00:23:38.875062 containerd[1977]: time="2024-07-02T00:23:38.873274736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:23:39.439562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735068225.mount: Deactivated successfully.
Jul 2 00:23:39.450738 containerd[1977]: time="2024-07-02T00:23:39.450626342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:39.453731 containerd[1977]: time="2024-07-02T00:23:39.453344825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jul 2 00:23:39.457183 containerd[1977]: time="2024-07-02T00:23:39.454946084Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:39.471569 containerd[1977]: time="2024-07-02T00:23:39.467431760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:39.471569 containerd[1977]: time="2024-07-02T00:23:39.470359797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 596.818432ms"
Jul 2 00:23:39.471569 containerd[1977]: time="2024-07-02T00:23:39.470405770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:23:39.505700 containerd[1977]: time="2024-07-02T00:23:39.505669751Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:23:40.085419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748286961.mount: Deactivated successfully.
Jul 2 00:23:41.986116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:23:41.995418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:42.834371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:42.844958 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:42.998382 kubelet[2688]: E0702 00:23:42.998323 2688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:43.004865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:43.005362 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:43.934288 containerd[1977]: time="2024-07-02T00:23:43.934148789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:43.935701 containerd[1977]: time="2024-07-02T00:23:43.935645140Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jul 2 00:23:43.939787 containerd[1977]: time="2024-07-02T00:23:43.938079894Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:43.944837 containerd[1977]: time="2024-07-02T00:23:43.944767095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:43.946378 containerd[1977]: time="2024-07-02T00:23:43.946205360Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.440323817s"
Jul 2 00:23:43.946378 containerd[1977]: time="2024-07-02T00:23:43.946256743Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 00:23:47.232582 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:47.246481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:47.292641 systemd[1]: Reloading requested from client PID 2764 ('systemctl') (unit session-9.scope)...
Jul 2 00:23:47.292831 systemd[1]: Reloading...
Jul 2 00:23:47.488076 zram_generator::config[2802]: No configuration found.
Jul 2 00:23:47.692503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:47.804875 systemd[1]: Reloading finished in 511 ms.
Jul 2 00:23:47.872650 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:23:47.872747 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:23:47.873593 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:47.879780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:48.329364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:48.333023 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:23:48.426955 kubelet[2862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:23:48.427355 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:23:48.427355 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:23:48.427355 kubelet[2862]: I0702 00:23:48.427189 2862 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:23:48.857884 kubelet[2862]: I0702 00:23:48.857845 2862 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 00:23:48.857884 kubelet[2862]: I0702 00:23:48.857878 2862 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:23:48.858282 kubelet[2862]: I0702 00:23:48.858192 2862 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 00:23:48.897396 kubelet[2862]: E0702 00:23:48.897319 2862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.56:6443: connect: connection refused
Jul 2 00:23:48.897740 kubelet[2862]: I0702 00:23:48.897595 2862 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:23:48.915295 kubelet[2862]: I0702 00:23:48.915252 2862 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:23:48.915541 kubelet[2862]: I0702 00:23:48.915523 2862 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:23:48.917323 kubelet[2862]: I0702 00:23:48.917271 2862 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:23:48.918042 kubelet[2862]: I0702 00:23:48.918018 2862 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:23:48.918131 kubelet[2862]: I0702 00:23:48.918065 2862 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:23:48.918233 kubelet[2862]: I0702 00:23:48.918210 2862 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:23:48.919256 kubelet[2862]: I0702 00:23:48.919234 2862 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 00:23:48.919373 kubelet[2862]: I0702 00:23:48.919303 2862 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:23:48.919373 kubelet[2862]: I0702 00:23:48.919342 2862 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:23:48.919373 kubelet[2862]: I0702 00:23:48.919365 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:23:48.922974 kubelet[2862]: W0702 00:23:48.922544 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.19.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-56&limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused
Jul 2 00:23:48.922974 kubelet[2862]: E0702 00:23:48.922611 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-56&limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused
Jul 2 00:23:48.922974 kubelet[2862]: W0702 00:23:48.922747 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.19.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused
Jul 2 00:23:48.922974 kubelet[2862]: E0702 00:23:48.922801 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused
Jul 2 00:23:48.923220 kubelet[2862]: I0702 00:23:48.923085 2862 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:23:48.932075 kubelet[2862]: I0702 00:23:48.930146 2862 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:23:48.932075 kubelet[2862]: W0702 00:23:48.930238 2862 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:23:48.934955 kubelet[2862]: I0702 00:23:48.934925 2862 server.go:1256] "Started kubelet"
Jul 2 00:23:48.936337 kubelet[2862]: I0702 00:23:48.936309 2862 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:23:48.936732 kubelet[2862]: I0702 00:23:48.936712 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:23:48.937516 kubelet[2862]: I0702 00:23:48.937494 2862 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 00:23:48.939637 kubelet[2862]: I0702 00:23:48.939612 2862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:23:48.939859 kubelet[2862]: I0702 00:23:48.939837 2862 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:23:48.947321 kubelet[2862]: I0702 00:23:48.947293 2862 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:23:48.954723 kubelet[2862]: I0702 00:23:48.954358 2862 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:23:48.959312 kubelet[2862]: E0702 00:23:48.954766 2862 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.56:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.56:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-56.17de3d9794061bb0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-56,UID:ip-172-31-19-56,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-56,},FirstTimestamp:2024-07-02 00:23:48.934892464 +0000 UTC m=+0.593092614,LastTimestamp:2024-07-02 00:23:48.934892464 +0000 UTC m=+0.593092614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-56,}"
Jul 2 00:23:48.959312 kubelet[2862]: E0702 00:23:48.954905 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-56?timeout=10s\": dial tcp 172.31.19.56:6443: connect: connection refused" interval="200ms"
Jul 2 00:23:48.959312 kubelet[2862]: I0702 00:23:48.954970 2862 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:23:48.959312 kubelet[2862]: W0702 00:23:48.957263 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.19.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused
Jul 2 00:23:48.959312 kubelet[2862]: E0702 00:23:48.957323 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused
Jul 2 00:23:48.965790 kubelet[2862]: I0702 00:23:48.965749 2862 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:23:48.967178 kubelet[2862]: I0702 00:23:48.965903 2862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:23:48.971864 kubelet[2862]: I0702 00:23:48.971827 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:23:48.972597 kubelet[2862]: I0702 00:23:48.972135 2862 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:23:48.976374 kubelet[2862]: I0702 00:23:48.976267 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:23:48.976374 kubelet[2862]: I0702 00:23:48.976305 2862 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:23:48.976374 kubelet[2862]: I0702 00:23:48.976325 2862 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 00:23:48.976603 kubelet[2862]: E0702 00:23:48.976382 2862 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:23:48.982000 kubelet[2862]: W0702 00:23:48.981895 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.19.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused
Jul 2 00:23:48.984649 kubelet[2862]: E0702 00:23:48.984625 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused
Jul 2 00:23:49.005508 kubelet[2862]: I0702 00:23:49.005396 2862 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:23:49.005839 kubelet[2862]: I0702 00:23:49.005824 2862 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:23:49.005950 kubelet[2862]: I0702 00:23:49.005929 2862 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:23:49.011226 kubelet[2862]: I0702 00:23:49.011200 2862 policy_none.go:49] "None policy: Start"
Jul 2 00:23:49.012304 kubelet[2862]: I0702 00:23:49.012287 2862 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:23:49.012764 kubelet[2862]: I0702 00:23:49.012427 2862 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:23:49.023888 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 00:23:49.038529 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 00:23:49.048229 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 00:23:49.051487 kubelet[2862]: I0702 00:23:49.051467 2862 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:23:49.052237 kubelet[2862]: I0702 00:23:49.051805 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:23:49.053252 kubelet[2862]: I0702 00:23:49.053175 2862 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-56"
Jul 2 00:23:49.055060 kubelet[2862]: E0702 00:23:49.054952 2862 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-56\" not found"
Jul 2 00:23:49.055669 kubelet[2862]: E0702 00:23:49.055613 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.56:6443/api/v1/nodes\": dial tcp 172.31.19.56:6443: connect: connection refused" node="ip-172-31-19-56"
Jul 2 00:23:49.077570 kubelet[2862]: I0702 00:23:49.077524 2862 topology_manager.go:215] "Topology Admit Handler" podUID="b450b3a3ad0b2b05bd8966f0d7098a1a" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-56"
Jul 2 00:23:49.079231 kubelet[2862]: I0702 00:23:49.079194 2862 topology_manager.go:215] "Topology Admit Handler" podUID="1ef32260d8b550fb87a0efcc065ee719" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:49.081683 kubelet[2862]: I0702 00:23:49.080732 2862 topology_manager.go:215] "Topology Admit Handler" podUID="d40707d42f2e1d2a033e37ee0c05facc" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-56"
Jul 2 00:23:49.090549 systemd[1]: Created slice kubepods-burstable-podb450b3a3ad0b2b05bd8966f0d7098a1a.slice - libcontainer container kubepods-burstable-podb450b3a3ad0b2b05bd8966f0d7098a1a.slice.
Jul 2 00:23:49.117159 systemd[1]: Created slice kubepods-burstable-pod1ef32260d8b550fb87a0efcc065ee719.slice - libcontainer container kubepods-burstable-pod1ef32260d8b550fb87a0efcc065ee719.slice.
Jul 2 00:23:49.128220 systemd[1]: Created slice kubepods-burstable-podd40707d42f2e1d2a033e37ee0c05facc.slice - libcontainer container kubepods-burstable-podd40707d42f2e1d2a033e37ee0c05facc.slice.
Jul 2 00:23:49.157795 kubelet[2862]: I0702 00:23:49.157747 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:49.157795 kubelet[2862]: I0702 00:23:49.157797 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b450b3a3ad0b2b05bd8966f0d7098a1a-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-56\" (UID: \"b450b3a3ad0b2b05bd8966f0d7098a1a\") " pod="kube-system/kube-apiserver-ip-172-31-19-56"
Jul 2 00:23:49.158026 kubelet[2862]: I0702 00:23:49.157827 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b450b3a3ad0b2b05bd8966f0d7098a1a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-56\" (UID: \"b450b3a3ad0b2b05bd8966f0d7098a1a\") " pod="kube-system/kube-apiserver-ip-172-31-19-56"
Jul 2 00:23:49.158026 kubelet[2862]: I0702 00:23:49.157852 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:49.158026 kubelet[2862]: I0702 00:23:49.157879 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:49.158026 kubelet[2862]: I0702 00:23:49.157915 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:49.158026 kubelet[2862]: I0702 00:23:49.157955 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:49.158258 kubelet[2862]: I0702 00:23:49.157987 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d40707d42f2e1d2a033e37ee0c05facc-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-56\" (UID: \"d40707d42f2e1d2a033e37ee0c05facc\") " pod="kube-system/kube-scheduler-ip-172-31-19-56"
Jul 2 00:23:49.158258 kubelet[2862]: I0702 00:23:49.158019 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b450b3a3ad0b2b05bd8966f0d7098a1a-ca-certs\") pod \"kube-apiserver-ip-172-31-19-56\" (UID: \"b450b3a3ad0b2b05bd8966f0d7098a1a\") " pod="kube-system/kube-apiserver-ip-172-31-19-56"
Jul 2 00:23:49.158452 kubelet[2862]: E0702 00:23:49.158367 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-56?timeout=10s\": dial tcp 172.31.19.56:6443: connect: connection refused" interval="400ms"
Jul 2 00:23:49.257670 kubelet[2862]: I0702 00:23:49.257610 2862 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-56"
Jul 2 00:23:49.258020 kubelet[2862]: E0702 00:23:49.257997 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.56:6443/api/v1/nodes\": dial tcp 172.31.19.56:6443: connect: connection refused" node="ip-172-31-19-56"
Jul 2 00:23:49.415002 containerd[1977]: time="2024-07-02T00:23:49.414875331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-56,Uid:b450b3a3ad0b2b05bd8966f0d7098a1a,Namespace:kube-system,Attempt:0,}"
Jul 2 00:23:49.426728 containerd[1977]: time="2024-07-02T00:23:49.426688454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-56,Uid:1ef32260d8b550fb87a0efcc065ee719,Namespace:kube-system,Attempt:0,}"
Jul 2 00:23:49.431886 containerd[1977]: time="2024-07-02T00:23:49.431834409Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-56,Uid:d40707d42f2e1d2a033e37ee0c05facc,Namespace:kube-system,Attempt:0,}" Jul 2 00:23:49.561119 kubelet[2862]: E0702 00:23:49.559797 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-56?timeout=10s\": dial tcp 172.31.19.56:6443: connect: connection refused" interval="800ms" Jul 2 00:23:49.659831 kubelet[2862]: I0702 00:23:49.659805 2862 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-56" Jul 2 00:23:49.660262 kubelet[2862]: E0702 00:23:49.660238 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.56:6443/api/v1/nodes\": dial tcp 172.31.19.56:6443: connect: connection refused" node="ip-172-31-19-56" Jul 2 00:23:49.792204 update_engine[1958]: I0702 00:23:49.792093 1958 update_attempter.cc:509] Updating boot flags... Jul 2 00:23:49.863732 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2906) Jul 2 00:23:49.992521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237158383.mount: Deactivated successfully. 
Jul 2 00:23:49.997210 kubelet[2862]: W0702 00:23:49.996752 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.19.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-56&limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:49.997210 kubelet[2862]: E0702 00:23:49.996884 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-56&limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:50.025074 containerd[1977]: time="2024-07-02T00:23:50.015224056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:23:50.074369 containerd[1977]: time="2024-07-02T00:23:50.074237521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:23:50.083664 containerd[1977]: time="2024-07-02T00:23:50.081497234Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:23:50.107131 containerd[1977]: time="2024-07-02T00:23:50.105856430Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:23:50.107131 containerd[1977]: time="2024-07-02T00:23:50.106291501Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:23:50.108079 kubelet[2862]: W0702 00:23:50.107448 2862 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.19.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:50.108079 kubelet[2862]: E0702 00:23:50.107522 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:50.112080 containerd[1977]: time="2024-07-02T00:23:50.111379080Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:23:50.112080 containerd[1977]: time="2024-07-02T00:23:50.112035569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:23:50.114281 containerd[1977]: time="2024-07-02T00:23:50.114237678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:23:50.120119 containerd[1977]: time="2024-07-02T00:23:50.118958636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 698.042383ms" Jul 2 00:23:50.128065 containerd[1977]: time="2024-07-02T00:23:50.127380715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 695.376441ms" Jul 2 00:23:50.132325 containerd[1977]: time="2024-07-02T00:23:50.132218667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 705.4272ms" Jul 2 00:23:50.154691 kubelet[2862]: W0702 00:23:50.154578 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.19.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:50.154691 kubelet[2862]: E0702 00:23:50.154679 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:50.218451 kubelet[2862]: W0702 00:23:50.217545 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.19.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:50.218451 kubelet[2862]: E0702 00:23:50.217786 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:50.221166 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2909) 
Jul 2 00:23:50.360598 kubelet[2862]: E0702 00:23:50.360363 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-56?timeout=10s\": dial tcp 172.31.19.56:6443: connect: connection refused" interval="1.6s" Jul 2 00:23:50.509551 kubelet[2862]: I0702 00:23:50.509506 2862 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-56" Jul 2 00:23:50.523924 kubelet[2862]: E0702 00:23:50.523888 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.56:6443/api/v1/nodes\": dial tcp 172.31.19.56:6443: connect: connection refused" node="ip-172-31-19-56" Jul 2 00:23:50.703524 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2909) Jul 2 00:23:50.774038 containerd[1977]: time="2024-07-02T00:23:50.773488867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:50.774634 containerd[1977]: time="2024-07-02T00:23:50.774134224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:50.774634 containerd[1977]: time="2024-07-02T00:23:50.774229179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:50.774634 containerd[1977]: time="2024-07-02T00:23:50.774396917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:50.806005 containerd[1977]: time="2024-07-02T00:23:50.805868942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:50.806005 containerd[1977]: time="2024-07-02T00:23:50.805945329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:50.806253 containerd[1977]: time="2024-07-02T00:23:50.805976082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:50.806253 containerd[1977]: time="2024-07-02T00:23:50.805995771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:50.837276 containerd[1977]: time="2024-07-02T00:23:50.827061857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:50.837276 containerd[1977]: time="2024-07-02T00:23:50.827122476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:50.837276 containerd[1977]: time="2024-07-02T00:23:50.827146470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:50.837276 containerd[1977]: time="2024-07-02T00:23:50.827162331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:50.902485 systemd[1]: Started cri-containerd-a081d22adc7985ac42a4844d47118c0f508f444c081a9ed1dc6307bd84760f59.scope - libcontainer container a081d22adc7985ac42a4844d47118c0f508f444c081a9ed1dc6307bd84760f59. Jul 2 00:23:50.929372 systemd[1]: Started cri-containerd-d4b456e17e301b7ba2d5c4ea11864b9b7a078497f74281f64896d217c2b24fc5.scope - libcontainer container d4b456e17e301b7ba2d5c4ea11864b9b7a078497f74281f64896d217c2b24fc5. 
Jul 2 00:23:51.017352 systemd[1]: Started cri-containerd-561861da65219675c4aeb18117ecc22e24c7eec5e72613825028c031dc032887.scope - libcontainer container 561861da65219675c4aeb18117ecc22e24c7eec5e72613825028c031dc032887. Jul 2 00:23:51.072145 kubelet[2862]: E0702 00:23:51.072000 2862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:51.164534 containerd[1977]: time="2024-07-02T00:23:51.164357190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-56,Uid:d40707d42f2e1d2a033e37ee0c05facc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4b456e17e301b7ba2d5c4ea11864b9b7a078497f74281f64896d217c2b24fc5\"" Jul 2 00:23:51.191190 containerd[1977]: time="2024-07-02T00:23:51.191139067Z" level=info msg="CreateContainer within sandbox \"d4b456e17e301b7ba2d5c4ea11864b9b7a078497f74281f64896d217c2b24fc5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:23:51.191770 containerd[1977]: time="2024-07-02T00:23:51.191736891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-56,Uid:b450b3a3ad0b2b05bd8966f0d7098a1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a081d22adc7985ac42a4844d47118c0f508f444c081a9ed1dc6307bd84760f59\"" Jul 2 00:23:51.194434 containerd[1977]: time="2024-07-02T00:23:51.194386676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-56,Uid:1ef32260d8b550fb87a0efcc065ee719,Namespace:kube-system,Attempt:0,} returns sandbox id \"561861da65219675c4aeb18117ecc22e24c7eec5e72613825028c031dc032887\"" Jul 2 00:23:51.201153 containerd[1977]: time="2024-07-02T00:23:51.201003287Z" level=info msg="CreateContainer within sandbox 
\"561861da65219675c4aeb18117ecc22e24c7eec5e72613825028c031dc032887\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:23:51.201356 containerd[1977]: time="2024-07-02T00:23:51.201327672Z" level=info msg="CreateContainer within sandbox \"a081d22adc7985ac42a4844d47118c0f508f444c081a9ed1dc6307bd84760f59\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:23:51.237274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993938654.mount: Deactivated successfully. Jul 2 00:23:51.288445 containerd[1977]: time="2024-07-02T00:23:51.288248525Z" level=info msg="CreateContainer within sandbox \"d4b456e17e301b7ba2d5c4ea11864b9b7a078497f74281f64896d217c2b24fc5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6\"" Jul 2 00:23:51.289319 containerd[1977]: time="2024-07-02T00:23:51.289273989Z" level=info msg="StartContainer for \"adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6\"" Jul 2 00:23:51.299455 containerd[1977]: time="2024-07-02T00:23:51.299331955Z" level=info msg="CreateContainer within sandbox \"561861da65219675c4aeb18117ecc22e24c7eec5e72613825028c031dc032887\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d\"" Jul 2 00:23:51.301819 containerd[1977]: time="2024-07-02T00:23:51.301780289Z" level=info msg="StartContainer for \"77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d\"" Jul 2 00:23:51.313031 containerd[1977]: time="2024-07-02T00:23:51.312983314Z" level=info msg="CreateContainer within sandbox \"a081d22adc7985ac42a4844d47118c0f508f444c081a9ed1dc6307bd84760f59\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0fdbc17f08c571762e2a2565eb1d8b03d329329c72a1bf3a4c2717db3de47e9c\"" Jul 2 00:23:51.314356 containerd[1977]: 
time="2024-07-02T00:23:51.314320659Z" level=info msg="StartContainer for \"0fdbc17f08c571762e2a2565eb1d8b03d329329c72a1bf3a4c2717db3de47e9c\"" Jul 2 00:23:51.361590 systemd[1]: Started cri-containerd-adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6.scope - libcontainer container adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6. Jul 2 00:23:51.403337 systemd[1]: Started cri-containerd-77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d.scope - libcontainer container 77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d. Jul 2 00:23:51.440349 systemd[1]: Started cri-containerd-0fdbc17f08c571762e2a2565eb1d8b03d329329c72a1bf3a4c2717db3de47e9c.scope - libcontainer container 0fdbc17f08c571762e2a2565eb1d8b03d329329c72a1bf3a4c2717db3de47e9c. Jul 2 00:23:51.545148 containerd[1977]: time="2024-07-02T00:23:51.544039270Z" level=info msg="StartContainer for \"adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6\" returns successfully" Jul 2 00:23:51.582476 containerd[1977]: time="2024-07-02T00:23:51.582432053Z" level=info msg="StartContainer for \"0fdbc17f08c571762e2a2565eb1d8b03d329329c72a1bf3a4c2717db3de47e9c\" returns successfully" Jul 2 00:23:51.601269 containerd[1977]: time="2024-07-02T00:23:51.598509640Z" level=info msg="StartContainer for \"77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d\" returns successfully" Jul 2 00:23:51.794795 kubelet[2862]: E0702 00:23:51.794755 2862 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.56:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.56:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-56.17de3d9794061bb0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-56,UID:ip-172-31-19-56,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-56,},FirstTimestamp:2024-07-02 00:23:48.934892464 +0000 UTC m=+0.593092614,LastTimestamp:2024-07-02 00:23:48.934892464 +0000 UTC m=+0.593092614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-56,}" Jul 2 00:23:51.961458 kubelet[2862]: E0702 00:23:51.961391 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-56?timeout=10s\": dial tcp 172.31.19.56:6443: connect: connection refused" interval="3.2s" Jul 2 00:23:52.003815 kubelet[2862]: W0702 00:23:52.003402 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.19.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:52.003815 kubelet[2862]: E0702 00:23:52.003476 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.56:6443: connect: connection refused Jul 2 00:23:52.127206 kubelet[2862]: I0702 00:23:52.126638 2862 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-56" Jul 2 00:23:52.127206 kubelet[2862]: E0702 00:23:52.126995 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.56:6443/api/v1/nodes\": dial tcp 172.31.19.56:6443: connect: connection refused" node="ip-172-31-19-56" Jul 2 00:23:54.390771 kubelet[2862]: E0702 00:23:54.390729 2862 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-19-56" not 
found Jul 2 00:23:54.763662 kubelet[2862]: E0702 00:23:54.763625 2862 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-19-56" not found Jul 2 00:23:54.937495 kubelet[2862]: I0702 00:23:54.937263 2862 apiserver.go:52] "Watching apiserver" Jul 2 00:23:54.957454 kubelet[2862]: I0702 00:23:54.957396 2862 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:23:55.168453 kubelet[2862]: E0702 00:23:55.168263 2862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-56\" not found" node="ip-172-31-19-56" Jul 2 00:23:55.204761 kubelet[2862]: E0702 00:23:55.204168 2862 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-19-56" not found Jul 2 00:23:55.329879 kubelet[2862]: I0702 00:23:55.329847 2862 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-56" Jul 2 00:23:55.348981 kubelet[2862]: I0702 00:23:55.348798 2862 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-56" Jul 2 00:23:57.043491 systemd[1]: Reloading requested from client PID 3405 ('systemctl') (unit session-9.scope)... Jul 2 00:23:57.043514 systemd[1]: Reloading... Jul 2 00:23:57.256113 zram_generator::config[3443]: No configuration found. Jul 2 00:23:57.535449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:23:57.747604 systemd[1]: Reloading finished in 703 ms. Jul 2 00:23:57.818623 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:57.825804 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 2 00:23:57.826162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:23:57.832581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:58.443417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:23:58.445643 (kubelet)[3500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:23:58.582754 kubelet[3500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:23:58.582754 kubelet[3500]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:23:58.582754 kubelet[3500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:23:58.582754 kubelet[3500]: I0702 00:23:58.582189 3500 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:23:58.614997 kubelet[3500]: I0702 00:23:58.614759 3500 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:23:58.614997 kubelet[3500]: I0702 00:23:58.614793 3500 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:23:58.615928 kubelet[3500]: I0702 00:23:58.615615 3500 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:23:58.620107 kubelet[3500]: I0702 00:23:58.619312 3500 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 00:23:58.639493 kubelet[3500]: I0702 00:23:58.639448 3500 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:23:58.666811 kubelet[3500]: I0702 00:23:58.666509 3500 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:23:58.667722 kubelet[3500]: I0702 00:23:58.667668 3500 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:23:58.672868 kubelet[3500]: I0702 00:23:58.668399 3500 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null
} Jul 2 00:23:58.672868 kubelet[3500]: I0702 00:23:58.668442 3500 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:23:58.672868 kubelet[3500]: I0702 00:23:58.668461 3500 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:23:58.672868 kubelet[3500]: I0702 00:23:58.668550 3500 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:23:58.672868 kubelet[3500]: I0702 00:23:58.668851 3500 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:23:58.672868 kubelet[3500]: I0702 00:23:58.672580 3500 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:23:58.672868 kubelet[3500]: I0702 00:23:58.672625 3500 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:23:58.680130 kubelet[3500]: I0702 00:23:58.672646 3500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:23:58.694704 kubelet[3500]: I0702 00:23:58.694193 3500 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:23:58.694704 kubelet[3500]: I0702 00:23:58.694500 3500 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:23:58.700721 kubelet[3500]: I0702 00:23:58.695810 3500 server.go:1256] "Started kubelet" Jul 2 00:23:58.700721 kubelet[3500]: I0702 00:23:58.697751 3500 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:23:58.700721 kubelet[3500]: I0702 00:23:58.700281 3500 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:23:58.702617 kubelet[3500]: I0702 00:23:58.702592 3500 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:23:58.703206 kubelet[3500]: I0702 00:23:58.703180 3500 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:23:58.708319 kubelet[3500]: I0702 00:23:58.708294 3500 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:23:58.730497 kubelet[3500]: I0702 00:23:58.730463 3500 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:23:58.731184 kubelet[3500]: I0702 00:23:58.731160 3500 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:23:58.733367 kubelet[3500]: I0702 00:23:58.733345 3500 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:23:58.751919 kubelet[3500]: I0702 00:23:58.751785 3500 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:23:58.758280 kubelet[3500]: I0702 00:23:58.758254 3500 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:23:58.759090 kubelet[3500]: I0702 00:23:58.759077 3500 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:23:58.780507 kubelet[3500]: I0702 00:23:58.780478 3500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:23:58.784993 kubelet[3500]: I0702 00:23:58.784959 3500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:23:58.785841 kubelet[3500]: I0702 00:23:58.785237 3500 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:23:58.785841 kubelet[3500]: I0702 00:23:58.785268 3500 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 00:23:58.785841 kubelet[3500]: E0702 00:23:58.785346 3500 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:23:58.848274 kubelet[3500]: I0702 00:23:58.848236 3500 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-56"
Jul 2 00:23:58.875461 kubelet[3500]: I0702 00:23:58.875339 3500 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-56"
Jul 2 00:23:58.879998 kubelet[3500]: I0702 00:23:58.877305 3500 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-56"
Jul 2 00:23:58.914856 kubelet[3500]: E0702 00:23:58.894966 3500 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 00:23:59.010258 kubelet[3500]: I0702 00:23:59.010225 3500 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:23:59.010258 kubelet[3500]: I0702 00:23:59.010249 3500 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:23:59.010258 kubelet[3500]: I0702 00:23:59.010271 3500 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:23:59.012212 kubelet[3500]: I0702 00:23:59.010625 3500 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 00:23:59.012212 kubelet[3500]: I0702 00:23:59.010670 3500 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 00:23:59.012212 kubelet[3500]: I0702 00:23:59.010680 3500 policy_none.go:49] "None policy: Start"
Jul 2 00:23:59.012212 kubelet[3500]: I0702 00:23:59.011705 3500 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:23:59.012212 kubelet[3500]: I0702 00:23:59.011733 3500 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:23:59.012212 kubelet[3500]: I0702 00:23:59.012113 3500 state_mem.go:75] "Updated machine memory state"
Jul 2 00:23:59.019232 kubelet[3500]: I0702 00:23:59.019075 3500 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:23:59.020946 kubelet[3500]: I0702 00:23:59.019970 3500 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:23:59.096117 kubelet[3500]: I0702 00:23:59.096079 3500 topology_manager.go:215] "Topology Admit Handler" podUID="d40707d42f2e1d2a033e37ee0c05facc" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-56"
Jul 2 00:23:59.097172 kubelet[3500]: I0702 00:23:59.096486 3500 topology_manager.go:215] "Topology Admit Handler" podUID="b450b3a3ad0b2b05bd8966f0d7098a1a" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-56"
Jul 2 00:23:59.097172 kubelet[3500]: I0702 00:23:59.096584 3500 topology_manager.go:215] "Topology Admit Handler" podUID="1ef32260d8b550fb87a0efcc065ee719" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:59.140518 kubelet[3500]: I0702 00:23:59.140476 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b450b3a3ad0b2b05bd8966f0d7098a1a-ca-certs\") pod \"kube-apiserver-ip-172-31-19-56\" (UID: \"b450b3a3ad0b2b05bd8966f0d7098a1a\") " pod="kube-system/kube-apiserver-ip-172-31-19-56"
Jul 2 00:23:59.140736 kubelet[3500]: I0702 00:23:59.140722 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:59.140884 kubelet[3500]: I0702 00:23:59.140873 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:59.141002 kubelet[3500]: I0702 00:23:59.140994 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:59.141186 kubelet[3500]: I0702 00:23:59.141174 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:59.141562 kubelet[3500]: I0702 00:23:59.141434 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d40707d42f2e1d2a033e37ee0c05facc-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-56\" (UID: \"d40707d42f2e1d2a033e37ee0c05facc\") " pod="kube-system/kube-scheduler-ip-172-31-19-56"
Jul 2 00:23:59.141562 kubelet[3500]: I0702 00:23:59.141474 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b450b3a3ad0b2b05bd8966f0d7098a1a-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-56\" (UID: \"b450b3a3ad0b2b05bd8966f0d7098a1a\") " pod="kube-system/kube-apiserver-ip-172-31-19-56"
Jul 2 00:23:59.141875 kubelet[3500]: I0702 00:23:59.141762 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b450b3a3ad0b2b05bd8966f0d7098a1a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-56\" (UID: \"b450b3a3ad0b2b05bd8966f0d7098a1a\") " pod="kube-system/kube-apiserver-ip-172-31-19-56"
Jul 2 00:23:59.141875 kubelet[3500]: I0702 00:23:59.141840 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ef32260d8b550fb87a0efcc065ee719-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-56\" (UID: \"1ef32260d8b550fb87a0efcc065ee719\") " pod="kube-system/kube-controller-manager-ip-172-31-19-56"
Jul 2 00:23:59.690103 kubelet[3500]: I0702 00:23:59.690036 3500 apiserver.go:52] "Watching apiserver"
Jul 2 00:23:59.733900 kubelet[3500]: I0702 00:23:59.733816 3500 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:23:59.931932 kubelet[3500]: E0702 00:23:59.931893 3500 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-19-56\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-56"
Jul 2 00:24:00.132144 kubelet[3500]: I0702 00:24:00.132090 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-56" podStartSLOduration=1.131667665 podStartE2EDuration="1.131667665s" podCreationTimestamp="2024-07-02 00:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:00.128582704 +0000 UTC m=+1.661197274" watchObservedRunningTime="2024-07-02 00:24:00.131667665 +0000 UTC m=+1.664282225"
Jul 2 00:24:00.214661 kubelet[3500]: I0702 00:24:00.214594 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-56" podStartSLOduration=1.214543436 podStartE2EDuration="1.214543436s" podCreationTimestamp="2024-07-02 00:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:00.181547331 +0000 UTC m=+1.714161901" watchObservedRunningTime="2024-07-02 00:24:00.214543436 +0000 UTC m=+1.747158007"
Jul 2 00:24:00.260213 kubelet[3500]: I0702 00:24:00.260036 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-56" podStartSLOduration=1.259987965 podStartE2EDuration="1.259987965s" podCreationTimestamp="2024-07-02 00:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:00.225215029 +0000 UTC m=+1.757829599" watchObservedRunningTime="2024-07-02 00:24:00.259987965 +0000 UTC m=+1.792602534"
Jul 2 00:24:05.373972 sudo[2316]: pam_unix(sudo:session): session closed for user root
Jul 2 00:24:05.397903 sshd[2313]: pam_unix(sshd:session): session closed for user core
Jul 2 00:24:05.402153 systemd[1]: sshd@8-172.31.19.56:22-147.75.109.163:33588.service: Deactivated successfully.
Jul 2 00:24:05.407151 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:24:05.407484 systemd[1]: session-9.scope: Consumed 5.047s CPU time, 133.9M memory peak, 0B memory swap peak.
Jul 2 00:24:05.409470 systemd-logind[1957]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:24:05.411228 systemd-logind[1957]: Removed session 9.
Jul 2 00:24:12.572801 kubelet[3500]: I0702 00:24:12.572761 3500 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 00:24:12.573441 containerd[1977]: time="2024-07-02T00:24:12.573397715Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 00:24:12.574133 kubelet[3500]: I0702 00:24:12.574106 3500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:24:12.624782 kubelet[3500]: I0702 00:24:12.624740 3500 topology_manager.go:215] "Topology Admit Handler" podUID="5fb685b2-8fcb-432c-8938-98e1a665cecf" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-k7h2h"
Jul 2 00:24:12.665157 systemd[1]: Created slice kubepods-besteffort-pod5fb685b2_8fcb_432c_8938_98e1a665cecf.slice - libcontainer container kubepods-besteffort-pod5fb685b2_8fcb_432c_8938_98e1a665cecf.slice.
Jul 2 00:24:12.684334 kubelet[3500]: W0702 00:24:12.684266 3500 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-19-56" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-19-56' and this object
Jul 2 00:24:12.684334 kubelet[3500]: E0702 00:24:12.684308 3500 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-19-56" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-19-56' and this object
Jul 2 00:24:12.685021 kubelet[3500]: W0702 00:24:12.684971 3500 reflector.go:539] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-56" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-19-56' and this object
Jul 2 00:24:12.685021 kubelet[3500]: E0702 00:24:12.684999 3500 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-56" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-19-56' and this object
Jul 2 00:24:12.704372 kubelet[3500]: I0702 00:24:12.704260 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq5lr\" (UniqueName: \"kubernetes.io/projected/5fb685b2-8fcb-432c-8938-98e1a665cecf-kube-api-access-mq5lr\") pod \"tigera-operator-76c4974c85-k7h2h\" (UID: \"5fb685b2-8fcb-432c-8938-98e1a665cecf\") " pod="tigera-operator/tigera-operator-76c4974c85-k7h2h"
Jul 2 00:24:12.704372 kubelet[3500]: I0702 00:24:12.704313 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5fb685b2-8fcb-432c-8938-98e1a665cecf-var-lib-calico\") pod \"tigera-operator-76c4974c85-k7h2h\" (UID: \"5fb685b2-8fcb-432c-8938-98e1a665cecf\") " pod="tigera-operator/tigera-operator-76c4974c85-k7h2h"
Jul 2 00:24:12.761869 kubelet[3500]: I0702 00:24:12.761102 3500 topology_manager.go:215] "Topology Admit Handler" podUID="d8ceee77-3e58-4018-bd67-10bea1dbcd93" podNamespace="kube-system" podName="kube-proxy-9jf6q"
Jul 2 00:24:12.772295 systemd[1]: Created slice kubepods-besteffort-podd8ceee77_3e58_4018_bd67_10bea1dbcd93.slice - libcontainer container kubepods-besteffort-podd8ceee77_3e58_4018_bd67_10bea1dbcd93.slice.
Jul 2 00:24:12.908556 kubelet[3500]: I0702 00:24:12.908393 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d8ceee77-3e58-4018-bd67-10bea1dbcd93-kube-proxy\") pod \"kube-proxy-9jf6q\" (UID: \"d8ceee77-3e58-4018-bd67-10bea1dbcd93\") " pod="kube-system/kube-proxy-9jf6q"
Jul 2 00:24:12.908556 kubelet[3500]: I0702 00:24:12.908460 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2nth\" (UniqueName: \"kubernetes.io/projected/d8ceee77-3e58-4018-bd67-10bea1dbcd93-kube-api-access-h2nth\") pod \"kube-proxy-9jf6q\" (UID: \"d8ceee77-3e58-4018-bd67-10bea1dbcd93\") " pod="kube-system/kube-proxy-9jf6q"
Jul 2 00:24:12.908556 kubelet[3500]: I0702 00:24:12.908499 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8ceee77-3e58-4018-bd67-10bea1dbcd93-xtables-lock\") pod \"kube-proxy-9jf6q\" (UID: \"d8ceee77-3e58-4018-bd67-10bea1dbcd93\") " pod="kube-system/kube-proxy-9jf6q"
Jul 2 00:24:12.908556 kubelet[3500]: I0702 00:24:12.908528 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8ceee77-3e58-4018-bd67-10bea1dbcd93-lib-modules\") pod \"kube-proxy-9jf6q\" (UID: \"d8ceee77-3e58-4018-bd67-10bea1dbcd93\") " pod="kube-system/kube-proxy-9jf6q"
Jul 2 00:24:13.081087 containerd[1977]: time="2024-07-02T00:24:13.081016887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jf6q,Uid:d8ceee77-3e58-4018-bd67-10bea1dbcd93,Namespace:kube-system,Attempt:0,}"
Jul 2 00:24:13.144770 containerd[1977]: time="2024-07-02T00:24:13.144510692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:24:13.144770 containerd[1977]: time="2024-07-02T00:24:13.144698423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:24:13.145638 containerd[1977]: time="2024-07-02T00:24:13.145580912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:24:13.145638 containerd[1977]: time="2024-07-02T00:24:13.145610600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:24:13.202282 systemd[1]: Started cri-containerd-d0a15dc2bde1a8ff47e29f080b90074a7b390ee4d59ff35d8a3e9a9dac793665.scope - libcontainer container d0a15dc2bde1a8ff47e29f080b90074a7b390ee4d59ff35d8a3e9a9dac793665.
Jul 2 00:24:13.267762 containerd[1977]: time="2024-07-02T00:24:13.267338213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jf6q,Uid:d8ceee77-3e58-4018-bd67-10bea1dbcd93,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0a15dc2bde1a8ff47e29f080b90074a7b390ee4d59ff35d8a3e9a9dac793665\""
Jul 2 00:24:13.279375 containerd[1977]: time="2024-07-02T00:24:13.279334999Z" level=info msg="CreateContainer within sandbox \"d0a15dc2bde1a8ff47e29f080b90074a7b390ee4d59ff35d8a3e9a9dac793665\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:24:13.305363 containerd[1977]: time="2024-07-02T00:24:13.305317308Z" level=info msg="CreateContainer within sandbox \"d0a15dc2bde1a8ff47e29f080b90074a7b390ee4d59ff35d8a3e9a9dac793665\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"feae6e2751f9445af86c1ad72f43ec744f20cd30674fcdc989a8dfd35b9f8b48\""
Jul 2 00:24:13.306234 containerd[1977]: time="2024-07-02T00:24:13.306188598Z" level=info msg="StartContainer for \"feae6e2751f9445af86c1ad72f43ec744f20cd30674fcdc989a8dfd35b9f8b48\""
Jul 2 00:24:13.349099 systemd[1]: Started cri-containerd-feae6e2751f9445af86c1ad72f43ec744f20cd30674fcdc989a8dfd35b9f8b48.scope - libcontainer container feae6e2751f9445af86c1ad72f43ec744f20cd30674fcdc989a8dfd35b9f8b48.
Jul 2 00:24:13.399554 containerd[1977]: time="2024-07-02T00:24:13.398975845Z" level=info msg="StartContainer for \"feae6e2751f9445af86c1ad72f43ec744f20cd30674fcdc989a8dfd35b9f8b48\" returns successfully"
Jul 2 00:24:13.836468 kubelet[3500]: E0702 00:24:13.836225 3500 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jul 2 00:24:13.836468 kubelet[3500]: E0702 00:24:13.836306 3500 projected.go:200] Error preparing data for projected volume kube-api-access-mq5lr for pod tigera-operator/tigera-operator-76c4974c85-k7h2h: failed to sync configmap cache: timed out waiting for the condition
Jul 2 00:24:13.837421 kubelet[3500]: E0702 00:24:13.836493 3500 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5fb685b2-8fcb-432c-8938-98e1a665cecf-kube-api-access-mq5lr podName:5fb685b2-8fcb-432c-8938-98e1a665cecf nodeName:}" failed. No retries permitted until 2024-07-02 00:24:14.336378513 +0000 UTC m=+15.868993075 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mq5lr" (UniqueName: "kubernetes.io/projected/5fb685b2-8fcb-432c-8938-98e1a665cecf-kube-api-access-mq5lr") pod "tigera-operator-76c4974c85-k7h2h" (UID: "5fb685b2-8fcb-432c-8938-98e1a665cecf") : failed to sync configmap cache: timed out waiting for the condition
Jul 2 00:24:14.029460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104482148.mount: Deactivated successfully.
Jul 2 00:24:14.481796 containerd[1977]: time="2024-07-02T00:24:14.481742195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-k7h2h,Uid:5fb685b2-8fcb-432c-8938-98e1a665cecf,Namespace:tigera-operator,Attempt:0,}"
Jul 2 00:24:14.551774 containerd[1977]: time="2024-07-02T00:24:14.551614726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:24:14.553619 containerd[1977]: time="2024-07-02T00:24:14.553011430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:24:14.553619 containerd[1977]: time="2024-07-02T00:24:14.553112651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:24:14.553619 containerd[1977]: time="2024-07-02T00:24:14.553138873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:24:14.613327 systemd[1]: Started cri-containerd-f06b77c6f913783483b0a9741e9f4f8f29e95fdaf85ca095fe5df6865cfe389d.scope - libcontainer container f06b77c6f913783483b0a9741e9f4f8f29e95fdaf85ca095fe5df6865cfe389d.
Jul 2 00:24:14.703645 containerd[1977]: time="2024-07-02T00:24:14.701389015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-k7h2h,Uid:5fb685b2-8fcb-432c-8938-98e1a665cecf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f06b77c6f913783483b0a9741e9f4f8f29e95fdaf85ca095fe5df6865cfe389d\""
Jul 2 00:24:14.726796 containerd[1977]: time="2024-07-02T00:24:14.725405357Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 00:24:16.142779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726907852.mount: Deactivated successfully.
Jul 2 00:24:16.940517 containerd[1977]: time="2024-07-02T00:24:16.940472308Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.941860 containerd[1977]: time="2024-07-02T00:24:16.941699030Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076060"
Jul 2 00:24:16.943217 containerd[1977]: time="2024-07-02T00:24:16.943140502Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.946288 containerd[1977]: time="2024-07-02T00:24:16.946219926Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.947310 containerd[1977]: time="2024-07-02T00:24:16.947134716Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.221586604s"
Jul 2 00:24:16.947310 containerd[1977]: time="2024-07-02T00:24:16.947176516Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jul 2 00:24:16.949260 containerd[1977]: time="2024-07-02T00:24:16.949224198Z" level=info msg="CreateContainer within sandbox \"f06b77c6f913783483b0a9741e9f4f8f29e95fdaf85ca095fe5df6865cfe389d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 00:24:16.987846 containerd[1977]: time="2024-07-02T00:24:16.987789534Z" level=info msg="CreateContainer within sandbox \"f06b77c6f913783483b0a9741e9f4f8f29e95fdaf85ca095fe5df6865cfe389d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b\""
Jul 2 00:24:16.988615 containerd[1977]: time="2024-07-02T00:24:16.988495425Z" level=info msg="StartContainer for \"e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b\""
Jul 2 00:24:17.037316 systemd[1]: Started cri-containerd-e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b.scope - libcontainer container e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b.
Jul 2 00:24:17.075457 containerd[1977]: time="2024-07-02T00:24:17.075380072Z" level=info msg="StartContainer for \"e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b\" returns successfully"
Jul 2 00:24:18.055372 kubelet[3500]: I0702 00:24:18.050269 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9jf6q" podStartSLOduration=6.050214544 podStartE2EDuration="6.050214544s" podCreationTimestamp="2024-07-02 00:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:14.049441471 +0000 UTC m=+15.582056042" watchObservedRunningTime="2024-07-02 00:24:18.050214544 +0000 UTC m=+19.582829113"
Jul 2 00:24:20.536815 kubelet[3500]: I0702 00:24:20.536745 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-k7h2h" podStartSLOduration=6.297428827 podStartE2EDuration="8.536669148s" podCreationTimestamp="2024-07-02 00:24:12 +0000 UTC" firstStartedPulling="2024-07-02 00:24:14.70844708 +0000 UTC m=+16.241061630" lastFinishedPulling="2024-07-02 00:24:16.947687394 +0000 UTC m=+18.480301951" observedRunningTime="2024-07-02 00:24:18.050888871 +0000 UTC m=+19.583503443" watchObservedRunningTime="2024-07-02 00:24:20.536669148 +0000 UTC m=+22.069283721"
Jul 2 00:24:20.539988 kubelet[3500]: I0702 00:24:20.537838 3500 topology_manager.go:215] "Topology Admit Handler" podUID="5499e781-c1f4-4330-8f4c-ccc6872efea9" podNamespace="calico-system" podName="calico-typha-7d5b94b5c7-wzjsc"
Jul 2 00:24:20.556783 systemd[1]: Created slice kubepods-besteffort-pod5499e781_c1f4_4330_8f4c_ccc6872efea9.slice - libcontainer container kubepods-besteffort-pod5499e781_c1f4_4330_8f4c_ccc6872efea9.slice.
Jul 2 00:24:20.586109 kubelet[3500]: I0702 00:24:20.585866 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjc4x\" (UniqueName: \"kubernetes.io/projected/5499e781-c1f4-4330-8f4c-ccc6872efea9-kube-api-access-xjc4x\") pod \"calico-typha-7d5b94b5c7-wzjsc\" (UID: \"5499e781-c1f4-4330-8f4c-ccc6872efea9\") " pod="calico-system/calico-typha-7d5b94b5c7-wzjsc"
Jul 2 00:24:20.586109 kubelet[3500]: I0702 00:24:20.585969 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5499e781-c1f4-4330-8f4c-ccc6872efea9-tigera-ca-bundle\") pod \"calico-typha-7d5b94b5c7-wzjsc\" (UID: \"5499e781-c1f4-4330-8f4c-ccc6872efea9\") " pod="calico-system/calico-typha-7d5b94b5c7-wzjsc"
Jul 2 00:24:20.586109 kubelet[3500]: I0702 00:24:20.586008 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5499e781-c1f4-4330-8f4c-ccc6872efea9-typha-certs\") pod \"calico-typha-7d5b94b5c7-wzjsc\" (UID: \"5499e781-c1f4-4330-8f4c-ccc6872efea9\") " pod="calico-system/calico-typha-7d5b94b5c7-wzjsc"
Jul 2 00:24:20.747797 kubelet[3500]: I0702 00:24:20.747718 3500 topology_manager.go:215] "Topology Admit Handler" podUID="b55fcff6-4d3a-4edd-94a6-42d2c5a06599" podNamespace="calico-system" podName="calico-node-w7tbv"
Jul 2 00:24:20.760855 systemd[1]: Created slice kubepods-besteffort-podb55fcff6_4d3a_4edd_94a6_42d2c5a06599.slice - libcontainer container kubepods-besteffort-podb55fcff6_4d3a_4edd_94a6_42d2c5a06599.slice.
Jul 2 00:24:20.795218 kubelet[3500]: I0702 00:24:20.795086 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-var-run-calico\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.795345 kubelet[3500]: I0702 00:24:20.795277 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-tigera-ca-bundle\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.800447 kubelet[3500]: I0702 00:24:20.795436 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-flexvol-driver-host\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.802836 kubelet[3500]: I0702 00:24:20.801458 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdqht\" (UniqueName: \"kubernetes.io/projected/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-kube-api-access-zdqht\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.803025 kubelet[3500]: I0702 00:24:20.803006 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-xtables-lock\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.803172 kubelet[3500]: I0702 00:24:20.803158 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-policysync\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.803271 kubelet[3500]: I0702 00:24:20.803259 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-cni-log-dir\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.803372 kubelet[3500]: I0702 00:24:20.803356 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-cni-net-dir\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.803515 kubelet[3500]: I0702 00:24:20.803489 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-lib-modules\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.803585 kubelet[3500]: I0702 00:24:20.803572 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-node-certs\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.803631 kubelet[3500]: I0702 00:24:20.803618 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-var-lib-calico\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.805078 kubelet[3500]: I0702 00:24:20.803691 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b55fcff6-4d3a-4edd-94a6-42d2c5a06599-cni-bin-dir\") pod \"calico-node-w7tbv\" (UID: \"b55fcff6-4d3a-4edd-94a6-42d2c5a06599\") " pod="calico-system/calico-node-w7tbv"
Jul 2 00:24:20.868983 containerd[1977]: time="2024-07-02T00:24:20.868935455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d5b94b5c7-wzjsc,Uid:5499e781-c1f4-4330-8f4c-ccc6872efea9,Namespace:calico-system,Attempt:0,}"
Jul 2 00:24:20.908033 kubelet[3500]: E0702 00:24:20.907118 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:24:20.908033 kubelet[3500]: W0702 00:24:20.907148 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:24:20.908033 kubelet[3500]: E0702 00:24:20.907183 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:24:20.908033 kubelet[3500]: E0702 00:24:20.907729 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:24:20.908033 kubelet[3500]: W0702 00:24:20.907744 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:24:20.908033 kubelet[3500]: E0702 00:24:20.907764 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:24:20.910079 kubelet[3500]: E0702 00:24:20.909203 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:24:20.910079 kubelet[3500]: W0702 00:24:20.909223 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:24:20.910079 kubelet[3500]: E0702 00:24:20.909242 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:24:20.912632 kubelet[3500]: E0702 00:24:20.911151 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:24:20.912632 kubelet[3500]: W0702 00:24:20.911170 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:24:20.912632 kubelet[3500]: E0702 00:24:20.911190 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:24:20.913526 kubelet[3500]: E0702 00:24:20.913452 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:24:20.913848 kubelet[3500]: W0702 00:24:20.913818 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:24:20.914268 kubelet[3500]: E0702 00:24:20.914127 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:24:20.923532 kubelet[3500]: E0702 00:24:20.923507 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:24:20.923722 kubelet[3500]: W0702 00:24:20.923706 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:24:20.923827 kubelet[3500]: E0702 00:24:20.923815 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:24:20.952299 containerd[1977]: time="2024-07-02T00:24:20.949548900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:24:20.952299 containerd[1977]: time="2024-07-02T00:24:20.949635830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:24:20.952299 containerd[1977]: time="2024-07-02T00:24:20.949668918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:24:20.952299 containerd[1977]: time="2024-07-02T00:24:20.949692483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:20.962791 kubelet[3500]: E0702 00:24:20.961308 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:20.962791 kubelet[3500]: W0702 00:24:20.961334 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:20.962791 kubelet[3500]: E0702 00:24:20.961366 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:20.985590 kubelet[3500]: I0702 00:24:20.985547 3500 topology_manager.go:215] "Topology Admit Handler" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b" podNamespace="calico-system" podName="csi-node-driver-zx2v5" Jul 2 00:24:20.987800 kubelet[3500]: E0702 00:24:20.987422 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b" Jul 2 00:24:21.001342 systemd[1]: Started cri-containerd-42aa9b8191be751e90bb47f62194af3b2206e9b4c5256a86661bfcce8bb89acb.scope - libcontainer container 42aa9b8191be751e90bb47f62194af3b2206e9b4c5256a86661bfcce8bb89acb. 
Jul 2 00:24:21.078333 containerd[1977]: time="2024-07-02T00:24:21.078206707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w7tbv,Uid:b55fcff6-4d3a-4edd-94a6-42d2c5a06599,Namespace:calico-system,Attempt:0,}" Jul 2 00:24:21.084493 kubelet[3500]: E0702 00:24:21.084459 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.084493 kubelet[3500]: W0702 00:24:21.084490 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.084697 kubelet[3500]: E0702 00:24:21.084518 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.085565 kubelet[3500]: E0702 00:24:21.085540 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.085565 kubelet[3500]: W0702 00:24:21.085563 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.085713 kubelet[3500]: E0702 00:24:21.085587 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.086083 kubelet[3500]: E0702 00:24:21.085841 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.086083 kubelet[3500]: W0702 00:24:21.085853 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.086083 kubelet[3500]: E0702 00:24:21.085870 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.086266 kubelet[3500]: E0702 00:24:21.086122 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.086266 kubelet[3500]: W0702 00:24:21.086132 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.086266 kubelet[3500]: E0702 00:24:21.086150 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.086396 kubelet[3500]: E0702 00:24:21.086389 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.086443 kubelet[3500]: W0702 00:24:21.086398 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.086443 kubelet[3500]: E0702 00:24:21.086414 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.087077 kubelet[3500]: E0702 00:24:21.086602 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.087077 kubelet[3500]: W0702 00:24:21.086612 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.087077 kubelet[3500]: E0702 00:24:21.086625 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.087077 kubelet[3500]: E0702 00:24:21.086815 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.087077 kubelet[3500]: W0702 00:24:21.086823 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.087077 kubelet[3500]: E0702 00:24:21.086837 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.087378 kubelet[3500]: E0702 00:24:21.087102 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.087378 kubelet[3500]: W0702 00:24:21.087112 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.087378 kubelet[3500]: E0702 00:24:21.087128 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.087378 kubelet[3500]: E0702 00:24:21.087339 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.087378 kubelet[3500]: W0702 00:24:21.087347 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.087378 kubelet[3500]: E0702 00:24:21.087363 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.087701 kubelet[3500]: E0702 00:24:21.087563 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.087701 kubelet[3500]: W0702 00:24:21.087572 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.087701 kubelet[3500]: E0702 00:24:21.087586 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.087829 kubelet[3500]: E0702 00:24:21.087794 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.087829 kubelet[3500]: W0702 00:24:21.087803 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.087829 kubelet[3500]: E0702 00:24:21.087818 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.090769 kubelet[3500]: E0702 00:24:21.090458 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.090769 kubelet[3500]: W0702 00:24:21.090482 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.090769 kubelet[3500]: E0702 00:24:21.090509 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.092211 kubelet[3500]: E0702 00:24:21.090795 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.092211 kubelet[3500]: W0702 00:24:21.090806 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.092211 kubelet[3500]: E0702 00:24:21.090824 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.092211 kubelet[3500]: E0702 00:24:21.091133 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.092211 kubelet[3500]: W0702 00:24:21.091144 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.092211 kubelet[3500]: E0702 00:24:21.091159 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.092211 kubelet[3500]: E0702 00:24:21.091624 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.092211 kubelet[3500]: W0702 00:24:21.091636 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.092211 kubelet[3500]: E0702 00:24:21.091652 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.092211 kubelet[3500]: E0702 00:24:21.091872 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.092728 kubelet[3500]: W0702 00:24:21.091882 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.092728 kubelet[3500]: E0702 00:24:21.091899 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.092728 kubelet[3500]: E0702 00:24:21.092212 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.092728 kubelet[3500]: W0702 00:24:21.092223 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.092728 kubelet[3500]: E0702 00:24:21.092239 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.092728 kubelet[3500]: E0702 00:24:21.092469 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.092728 kubelet[3500]: W0702 00:24:21.092479 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.092728 kubelet[3500]: E0702 00:24:21.092500 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.093198 kubelet[3500]: E0702 00:24:21.093134 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.093198 kubelet[3500]: W0702 00:24:21.093146 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.093198 kubelet[3500]: E0702 00:24:21.093164 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.094127 kubelet[3500]: E0702 00:24:21.094106 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.094127 kubelet[3500]: W0702 00:24:21.094126 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.094254 kubelet[3500]: E0702 00:24:21.094143 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.108237 kubelet[3500]: E0702 00:24:21.108201 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.108237 kubelet[3500]: W0702 00:24:21.108234 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.108436 kubelet[3500]: E0702 00:24:21.108264 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.108436 kubelet[3500]: I0702 00:24:21.108320 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9f1917ed-818f-4ea3-bce5-d0bf3952f03b-socket-dir\") pod \"csi-node-driver-zx2v5\" (UID: \"9f1917ed-818f-4ea3-bce5-d0bf3952f03b\") " pod="calico-system/csi-node-driver-zx2v5" Jul 2 00:24:21.110167 kubelet[3500]: E0702 00:24:21.110137 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.110167 kubelet[3500]: W0702 00:24:21.110165 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.110432 kubelet[3500]: E0702 00:24:21.110208 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.110543 kubelet[3500]: E0702 00:24:21.110526 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.110543 kubelet[3500]: W0702 00:24:21.110545 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.110689 kubelet[3500]: E0702 00:24:21.110639 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.110961 kubelet[3500]: E0702 00:24:21.110943 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.110961 kubelet[3500]: W0702 00:24:21.110958 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.111095 kubelet[3500]: E0702 00:24:21.110977 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.111095 kubelet[3500]: I0702 00:24:21.111017 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f1917ed-818f-4ea3-bce5-d0bf3952f03b-kubelet-dir\") pod \"csi-node-driver-zx2v5\" (UID: \"9f1917ed-818f-4ea3-bce5-d0bf3952f03b\") " pod="calico-system/csi-node-driver-zx2v5" Jul 2 00:24:21.111340 kubelet[3500]: E0702 00:24:21.111322 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.111418 kubelet[3500]: W0702 00:24:21.111341 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.111418 kubelet[3500]: E0702 00:24:21.111389 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.111698 kubelet[3500]: I0702 00:24:21.111661 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9f1917ed-818f-4ea3-bce5-d0bf3952f03b-registration-dir\") pod \"csi-node-driver-zx2v5\" (UID: \"9f1917ed-818f-4ea3-bce5-d0bf3952f03b\") " pod="calico-system/csi-node-driver-zx2v5" Jul 2 00:24:21.114160 kubelet[3500]: E0702 00:24:21.114139 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.114160 kubelet[3500]: W0702 00:24:21.114158 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.114322 kubelet[3500]: E0702 00:24:21.114187 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.114973 kubelet[3500]: E0702 00:24:21.114952 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.114973 kubelet[3500]: W0702 00:24:21.114972 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.116127 kubelet[3500]: E0702 00:24:21.116103 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.116568 kubelet[3500]: E0702 00:24:21.116306 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.116568 kubelet[3500]: W0702 00:24:21.116320 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.116568 kubelet[3500]: E0702 00:24:21.116406 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.116568 kubelet[3500]: I0702 00:24:21.116442 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk4g8\" (UniqueName: \"kubernetes.io/projected/9f1917ed-818f-4ea3-bce5-d0bf3952f03b-kube-api-access-mk4g8\") pod \"csi-node-driver-zx2v5\" (UID: \"9f1917ed-818f-4ea3-bce5-d0bf3952f03b\") " pod="calico-system/csi-node-driver-zx2v5" Jul 2 00:24:21.117554 kubelet[3500]: E0702 00:24:21.117081 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.117554 kubelet[3500]: W0702 00:24:21.117098 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.117554 kubelet[3500]: E0702 00:24:21.117127 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.118448 kubelet[3500]: E0702 00:24:21.118368 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.118448 kubelet[3500]: W0702 00:24:21.118383 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.118699 kubelet[3500]: E0702 00:24:21.118602 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.120065 kubelet[3500]: E0702 00:24:21.119671 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.120065 kubelet[3500]: W0702 00:24:21.119686 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.120460 kubelet[3500]: E0702 00:24:21.120324 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.120460 kubelet[3500]: I0702 00:24:21.120365 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9f1917ed-818f-4ea3-bce5-d0bf3952f03b-varrun\") pod \"csi-node-driver-zx2v5\" (UID: \"9f1917ed-818f-4ea3-bce5-d0bf3952f03b\") " pod="calico-system/csi-node-driver-zx2v5" Jul 2 00:24:21.121271 kubelet[3500]: E0702 00:24:21.121137 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.121271 kubelet[3500]: W0702 00:24:21.121154 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.121271 kubelet[3500]: E0702 00:24:21.121194 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.122104 kubelet[3500]: E0702 00:24:21.121658 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.122104 kubelet[3500]: W0702 00:24:21.121682 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.122104 kubelet[3500]: E0702 00:24:21.121713 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.122806 kubelet[3500]: E0702 00:24:21.122490 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.122806 kubelet[3500]: W0702 00:24:21.122505 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.122806 kubelet[3500]: E0702 00:24:21.122625 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.125079 kubelet[3500]: E0702 00:24:21.123312 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.125079 kubelet[3500]: W0702 00:24:21.123327 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.125079 kubelet[3500]: E0702 00:24:21.123345 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.151786 containerd[1977]: time="2024-07-02T00:24:21.151660989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:21.155797 containerd[1977]: time="2024-07-02T00:24:21.151747015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:21.155797 containerd[1977]: time="2024-07-02T00:24:21.151908992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:21.155797 containerd[1977]: time="2024-07-02T00:24:21.152086276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:21.187485 systemd[1]: Started cri-containerd-325f5384ae5089b9ffb8ae7679afb0917802ec26d8ba6fe52e5eb6e6f87210d2.scope - libcontainer container 325f5384ae5089b9ffb8ae7679afb0917802ec26d8ba6fe52e5eb6e6f87210d2. Jul 2 00:24:21.222073 kubelet[3500]: E0702 00:24:21.221901 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.222073 kubelet[3500]: W0702 00:24:21.221928 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.222073 kubelet[3500]: E0702 00:24:21.221956 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.223726 kubelet[3500]: E0702 00:24:21.223017 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.223726 kubelet[3500]: W0702 00:24:21.223039 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.223726 kubelet[3500]: E0702 00:24:21.223086 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.224408 kubelet[3500]: E0702 00:24:21.224243 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.224408 kubelet[3500]: W0702 00:24:21.224261 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.224607 kubelet[3500]: E0702 00:24:21.224423 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.224775 kubelet[3500]: E0702 00:24:21.224728 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.224775 kubelet[3500]: W0702 00:24:21.224739 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.225111 kubelet[3500]: E0702 00:24:21.225004 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.225569 kubelet[3500]: E0702 00:24:21.225487 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.225569 kubelet[3500]: W0702 00:24:21.225501 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.226021 kubelet[3500]: E0702 00:24:21.225902 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.226375 kubelet[3500]: E0702 00:24:21.226255 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.226375 kubelet[3500]: W0702 00:24:21.226269 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.226375 kubelet[3500]: E0702 00:24:21.226336 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.227174 kubelet[3500]: E0702 00:24:21.227038 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.227174 kubelet[3500]: W0702 00:24:21.227071 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.227367 kubelet[3500]: E0702 00:24:21.227333 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.227609 kubelet[3500]: E0702 00:24:21.227598 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.227762 kubelet[3500]: W0702 00:24:21.227708 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.227762 kubelet[3500]: E0702 00:24:21.227744 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.228599 kubelet[3500]: E0702 00:24:21.228524 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.228599 kubelet[3500]: W0702 00:24:21.228542 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.228599 kubelet[3500]: E0702 00:24:21.228580 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.229471 kubelet[3500]: E0702 00:24:21.229396 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.229471 kubelet[3500]: W0702 00:24:21.229413 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.229881 kubelet[3500]: E0702 00:24:21.229782 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.229881 kubelet[3500]: W0702 00:24:21.229796 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.229989 kubelet[3500]: E0702 00:24:21.229895 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.229989 kubelet[3500]: E0702 00:24:21.229977 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.230286 kubelet[3500]: E0702 00:24:21.230181 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.230458 kubelet[3500]: W0702 00:24:21.230356 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.230569 kubelet[3500]: E0702 00:24:21.230554 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.230691 kubelet[3500]: E0702 00:24:21.230682 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.230798 kubelet[3500]: W0702 00:24:21.230745 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.231640 kubelet[3500]: E0702 00:24:21.231538 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.231640 kubelet[3500]: W0702 00:24:21.231552 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.231891 kubelet[3500]: E0702 00:24:21.231880 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end 
of JSON input Jul 2 00:24:21.232074 kubelet[3500]: W0702 00:24:21.231969 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.233097 kubelet[3500]: E0702 00:24:21.232893 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.233097 kubelet[3500]: W0702 00:24:21.232910 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.233202 kubelet[3500]: E0702 00:24:21.233134 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.233202 kubelet[3500]: E0702 00:24:21.233185 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.234452 kubelet[3500]: E0702 00:24:21.234227 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.234452 kubelet[3500]: E0702 00:24:21.234257 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.234452 kubelet[3500]: E0702 00:24:21.234294 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.234452 kubelet[3500]: W0702 00:24:21.234302 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.234452 kubelet[3500]: E0702 00:24:21.234318 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.235030 kubelet[3500]: E0702 00:24:21.235010 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.235131 kubelet[3500]: W0702 00:24:21.235028 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.235581 kubelet[3500]: E0702 00:24:21.235562 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.235748 kubelet[3500]: E0702 00:24:21.235653 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.235748 kubelet[3500]: W0702 00:24:21.235666 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.235952 kubelet[3500]: E0702 00:24:21.235860 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.236284 kubelet[3500]: E0702 00:24:21.236190 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.236284 kubelet[3500]: W0702 00:24:21.236202 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.236439 kubelet[3500]: E0702 00:24:21.236408 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.236719 kubelet[3500]: E0702 00:24:21.236624 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.236719 kubelet[3500]: W0702 00:24:21.236636 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.236719 kubelet[3500]: E0702 00:24:21.236701 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.237168 kubelet[3500]: E0702 00:24:21.237008 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.237168 kubelet[3500]: W0702 00:24:21.237020 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.237168 kubelet[3500]: E0702 00:24:21.237090 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.237713 kubelet[3500]: E0702 00:24:21.237567 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.237713 kubelet[3500]: W0702 00:24:21.237579 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.237713 kubelet[3500]: E0702 00:24:21.237617 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.238288 kubelet[3500]: E0702 00:24:21.238264 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.238288 kubelet[3500]: W0702 00:24:21.238284 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.239434 kubelet[3500]: E0702 00:24:21.238302 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.239434 kubelet[3500]: E0702 00:24:21.238879 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.239434 kubelet[3500]: W0702 00:24:21.238890 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.239434 kubelet[3500]: E0702 00:24:21.238906 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:21.264801 kubelet[3500]: E0702 00:24:21.264768 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:21.265030 kubelet[3500]: W0702 00:24:21.264952 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:21.265030 kubelet[3500]: E0702 00:24:21.264981 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:21.299463 containerd[1977]: time="2024-07-02T00:24:21.299419816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d5b94b5c7-wzjsc,Uid:5499e781-c1f4-4330-8f4c-ccc6872efea9,Namespace:calico-system,Attempt:0,} returns sandbox id \"42aa9b8191be751e90bb47f62194af3b2206e9b4c5256a86661bfcce8bb89acb\"" Jul 2 00:24:21.303441 containerd[1977]: time="2024-07-02T00:24:21.303368736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:24:21.362956 containerd[1977]: time="2024-07-02T00:24:21.362749960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w7tbv,Uid:b55fcff6-4d3a-4edd-94a6-42d2c5a06599,Namespace:calico-system,Attempt:0,} returns sandbox id \"325f5384ae5089b9ffb8ae7679afb0917802ec26d8ba6fe52e5eb6e6f87210d2\"" Jul 2 00:24:22.787325 kubelet[3500]: E0702 00:24:22.786891 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b" Jul 2 00:24:24.737147 containerd[1977]: time="2024-07-02T00:24:24.737079194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:24.738749 containerd[1977]: time="2024-07-02T00:24:24.738667093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 00:24:24.769082 containerd[1977]: time="2024-07-02T00:24:24.768150929Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:24.788602 kubelet[3500]: E0702 00:24:24.788558 3500 pod_workers.go:1298] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b" Jul 2 00:24:24.820218 containerd[1977]: time="2024-07-02T00:24:24.820167663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:24.822360 containerd[1977]: time="2024-07-02T00:24:24.821623210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.518200291s" Jul 2 00:24:24.822360 containerd[1977]: time="2024-07-02T00:24:24.821673580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:24:24.826805 containerd[1977]: time="2024-07-02T00:24:24.826762661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:24:24.869912 containerd[1977]: time="2024-07-02T00:24:24.868756942Z" level=info msg="CreateContainer within sandbox \"42aa9b8191be751e90bb47f62194af3b2206e9b4c5256a86661bfcce8bb89acb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:24:24.960729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1367790931.mount: Deactivated successfully. 
Jul 2 00:24:24.997561 containerd[1977]: time="2024-07-02T00:24:24.996880267Z" level=info msg="CreateContainer within sandbox \"42aa9b8191be751e90bb47f62194af3b2206e9b4c5256a86661bfcce8bb89acb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1f5c573de80e7fb913ee6c15c99f525c19107e75307e1f2535419f875d074833\"" Jul 2 00:24:25.013483 containerd[1977]: time="2024-07-02T00:24:25.013429198Z" level=info msg="StartContainer for \"1f5c573de80e7fb913ee6c15c99f525c19107e75307e1f2535419f875d074833\"" Jul 2 00:24:25.141907 systemd[1]: Started cri-containerd-1f5c573de80e7fb913ee6c15c99f525c19107e75307e1f2535419f875d074833.scope - libcontainer container 1f5c573de80e7fb913ee6c15c99f525c19107e75307e1f2535419f875d074833. Jul 2 00:24:25.315765 containerd[1977]: time="2024-07-02T00:24:25.315618213Z" level=info msg="StartContainer for \"1f5c573de80e7fb913ee6c15c99f525c19107e75307e1f2535419f875d074833\" returns successfully" Jul 2 00:24:26.110415 kubelet[3500]: I0702 00:24:26.110366 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7d5b94b5c7-wzjsc" podStartSLOduration=2.588704684 podStartE2EDuration="6.110166224s" podCreationTimestamp="2024-07-02 00:24:20 +0000 UTC" firstStartedPulling="2024-07-02 00:24:21.301615131 +0000 UTC m=+22.834229693" lastFinishedPulling="2024-07-02 00:24:24.823076664 +0000 UTC m=+26.355691233" observedRunningTime="2024-07-02 00:24:26.109908455 +0000 UTC m=+27.642523020" watchObservedRunningTime="2024-07-02 00:24:26.110166224 +0000 UTC m=+27.642780791" Jul 2 00:24:26.152755 kubelet[3500]: E0702 00:24:26.152172 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.152755 kubelet[3500]: W0702 00:24:26.152199 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Jul 2 00:24:26.152755 kubelet[3500]: E0702 00:24:26.152380 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:26.154193 kubelet[3500]: E0702 00:24:26.153668 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.154193 kubelet[3500]: W0702 00:24:26.153700 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.154193 kubelet[3500]: E0702 00:24:26.153723 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:26.155184 kubelet[3500]: E0702 00:24:26.154807 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.155184 kubelet[3500]: W0702 00:24:26.154827 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.155184 kubelet[3500]: E0702 00:24:26.154847 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:26.156256 kubelet[3500]: E0702 00:24:26.155594 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.156256 kubelet[3500]: W0702 00:24:26.155609 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.156256 kubelet[3500]: E0702 00:24:26.155629 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:26.157490 kubelet[3500]: E0702 00:24:26.157327 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.157490 kubelet[3500]: W0702 00:24:26.157359 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.157490 kubelet[3500]: E0702 00:24:26.157377 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:26.159034 kubelet[3500]: E0702 00:24:26.158619 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.159034 kubelet[3500]: W0702 00:24:26.158634 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.159034 kubelet[3500]: E0702 00:24:26.158933 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:26.160156 kubelet[3500]: E0702 00:24:26.159791 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.160156 kubelet[3500]: W0702 00:24:26.159805 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.160156 kubelet[3500]: E0702 00:24:26.159839 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:26.161184 kubelet[3500]: E0702 00:24:26.160782 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.161184 kubelet[3500]: W0702 00:24:26.160795 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.161184 kubelet[3500]: E0702 00:24:26.160815 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:26.162315 kubelet[3500]: E0702 00:24:26.161891 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.162315 kubelet[3500]: W0702 00:24:26.161932 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.162315 kubelet[3500]: E0702 00:24:26.161951 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:26.163702 kubelet[3500]: E0702 00:24:26.162933 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.163702 kubelet[3500]: W0702 00:24:26.162948 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.163702 kubelet[3500]: E0702 00:24:26.162967 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:26.164656 kubelet[3500]: E0702 00:24:26.164461 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.164656 kubelet[3500]: W0702 00:24:26.164476 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.164656 kubelet[3500]: E0702 00:24:26.164492 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:26.166393 kubelet[3500]: E0702 00:24:26.166119 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.166393 kubelet[3500]: W0702 00:24:26.166180 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.166393 kubelet[3500]: E0702 00:24:26.166199 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:26.166958 kubelet[3500]: E0702 00:24:26.166666 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.166958 kubelet[3500]: W0702 00:24:26.166679 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.166958 kubelet[3500]: E0702 00:24:26.166696 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:24:26.168070 kubelet[3500]: E0702 00:24:26.167632 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.168070 kubelet[3500]: W0702 00:24:26.167645 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.168070 kubelet[3500]: E0702 00:24:26.167682 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:24:26.169353 kubelet[3500]: E0702 00:24:26.168902 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:24:26.169353 kubelet[3500]: W0702 00:24:26.168916 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:24:26.169353 kubelet[3500]: E0702 00:24:26.168933 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 2 00:24:26.381299 containerd[1977]: time="2024-07-02T00:24:26.381171929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:26.384940 containerd[1977]: time="2024-07-02T00:24:26.384878058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568"
Jul 2 00:24:26.386255 containerd[1977]: time="2024-07-02T00:24:26.386214142Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:26.405035 containerd[1977]: time="2024-07-02T00:24:26.404962839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:26.407137 containerd[1977]: time="2024-07-02T00:24:26.406116841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.579304932s"
Jul 2 00:24:26.407137 containerd[1977]: time="2024-07-02T00:24:26.406164023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\""
Jul 2 00:24:26.409281 containerd[1977]: time="2024-07-02T00:24:26.409110845Z" level=info msg="CreateContainer within sandbox \"325f5384ae5089b9ffb8ae7679afb0917802ec26d8ba6fe52e5eb6e6f87210d2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 2 00:24:26.453786 containerd[1977]: time="2024-07-02T00:24:26.453740409Z" level=info msg="CreateContainer within sandbox \"325f5384ae5089b9ffb8ae7679afb0917802ec26d8ba6fe52e5eb6e6f87210d2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"326c61915c47de6c833fbae046c49b6598a7aa85e40afcadafd143ef57a5c9ee\""
Jul 2 00:24:26.457097 containerd[1977]: time="2024-07-02T00:24:26.455296858Z" level=info msg="StartContainer for \"326c61915c47de6c833fbae046c49b6598a7aa85e40afcadafd143ef57a5c9ee\""
Jul 2 00:24:26.517740 systemd[1]: Started cri-containerd-326c61915c47de6c833fbae046c49b6598a7aa85e40afcadafd143ef57a5c9ee.scope - libcontainer container 326c61915c47de6c833fbae046c49b6598a7aa85e40afcadafd143ef57a5c9ee.
Jul 2 00:24:26.581240 containerd[1977]: time="2024-07-02T00:24:26.580658459Z" level=info msg="StartContainer for \"326c61915c47de6c833fbae046c49b6598a7aa85e40afcadafd143ef57a5c9ee\" returns successfully"
Jul 2 00:24:26.602643 systemd[1]: cri-containerd-326c61915c47de6c833fbae046c49b6598a7aa85e40afcadafd143ef57a5c9ee.scope: Deactivated successfully.
Jul 2 00:24:26.650004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-326c61915c47de6c833fbae046c49b6598a7aa85e40afcadafd143ef57a5c9ee-rootfs.mount: Deactivated successfully.
Jul 2 00:24:26.787001 kubelet[3500]: E0702 00:24:26.786962 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b"
Jul 2 00:24:27.086609 kubelet[3500]: I0702 00:24:27.086575 3500 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 00:24:27.595379 containerd[1977]: time="2024-07-02T00:24:27.550154806Z" level=info msg="shim disconnected" id=326c61915c47de6c833fbae046c49b6598a7aa85e40afcadafd143ef57a5c9ee namespace=k8s.io
Jul 2 00:24:27.595379 containerd[1977]: time="2024-07-02T00:24:27.595118751Z" level=warning msg="cleaning up after shim disconnected" id=326c61915c47de6c833fbae046c49b6598a7aa85e40afcadafd143ef57a5c9ee namespace=k8s.io
Jul 2 00:24:27.595379 containerd[1977]: time="2024-07-02T00:24:27.595141271Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:24:28.093140 containerd[1977]: time="2024-07-02T00:24:28.092977429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Jul 2 00:24:28.788072 kubelet[3500]: E0702 00:24:28.786280 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b"
Jul 2 00:24:30.795912 kubelet[3500]: E0702 00:24:30.786713 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b"
Jul 2 00:24:32.787509 kubelet[3500]: E0702 00:24:32.787440 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b"
Jul 2 00:24:33.892108 containerd[1977]: time="2024-07-02T00:24:33.892036278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:33.893734 containerd[1977]: time="2024-07-02T00:24:33.893666746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Jul 2 00:24:33.897297 containerd[1977]: time="2024-07-02T00:24:33.895758483Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:33.904528 containerd[1977]: time="2024-07-02T00:24:33.904481088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:33.906679 containerd[1977]: time="2024-07-02T00:24:33.906597609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.813572848s"
Jul 2 00:24:33.907073 containerd[1977]: time="2024-07-02T00:24:33.907025088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Jul 2 00:24:33.911342 containerd[1977]: time="2024-07-02T00:24:33.911299256Z" level=info msg="CreateContainer within sandbox \"325f5384ae5089b9ffb8ae7679afb0917802ec26d8ba6fe52e5eb6e6f87210d2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 2 00:24:33.960564 containerd[1977]: time="2024-07-02T00:24:33.960519040Z" level=info msg="CreateContainer within sandbox \"325f5384ae5089b9ffb8ae7679afb0917802ec26d8ba6fe52e5eb6e6f87210d2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1141721fccb846eb806475d7ba9a08694673eb03cdc386e3b58e1c520a724415\""
Jul 2 00:24:33.961882 containerd[1977]: time="2024-07-02T00:24:33.961847248Z" level=info msg="StartContainer for \"1141721fccb846eb806475d7ba9a08694673eb03cdc386e3b58e1c520a724415\""
Jul 2 00:24:34.096297 systemd[1]: Started cri-containerd-1141721fccb846eb806475d7ba9a08694673eb03cdc386e3b58e1c520a724415.scope - libcontainer container 1141721fccb846eb806475d7ba9a08694673eb03cdc386e3b58e1c520a724415.
Jul 2 00:24:34.171002 containerd[1977]: time="2024-07-02T00:24:34.170874975Z" level=info msg="StartContainer for \"1141721fccb846eb806475d7ba9a08694673eb03cdc386e3b58e1c520a724415\" returns successfully"
Jul 2 00:24:34.786153 kubelet[3500]: E0702 00:24:34.786104 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b"
Jul 2 00:24:36.212630 systemd[1]: cri-containerd-1141721fccb846eb806475d7ba9a08694673eb03cdc386e3b58e1c520a724415.scope: Deactivated successfully.
Jul 2 00:24:36.268814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1141721fccb846eb806475d7ba9a08694673eb03cdc386e3b58e1c520a724415-rootfs.mount: Deactivated successfully.
Jul 2 00:24:36.285431 kubelet[3500]: I0702 00:24:36.271359 3500 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 2 00:24:36.326414 kubelet[3500]: I0702 00:24:36.325428 3500 topology_manager.go:215] "Topology Admit Handler" podUID="6e96f5d9-454f-47bc-982e-380f36019f9f" podNamespace="kube-system" podName="coredns-76f75df574-4snj8"
Jul 2 00:24:36.342350 systemd[1]: Created slice kubepods-burstable-pod6e96f5d9_454f_47bc_982e_380f36019f9f.slice - libcontainer container kubepods-burstable-pod6e96f5d9_454f_47bc_982e_380f36019f9f.slice.
Jul 2 00:24:36.351932 kubelet[3500]: I0702 00:24:36.351894 3500 topology_manager.go:215] "Topology Admit Handler" podUID="31caa2b5-ff16-4a24-8e21-58c443916572" podNamespace="calico-system" podName="calico-kube-controllers-854b76f5d4-mrn7l"
Jul 2 00:24:36.355619 kubelet[3500]: I0702 00:24:36.355577 3500 topology_manager.go:215] "Topology Admit Handler" podUID="fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19" podNamespace="kube-system" podName="coredns-76f75df574-8dl4w"
Jul 2 00:24:36.361083 kubelet[3500]: W0702 00:24:36.359659 3500 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-19-56" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-56' and this object
Jul 2 00:24:36.361083 kubelet[3500]: E0702 00:24:36.359707 3500 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-19-56" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-56' and this object
Jul 2 00:24:36.375372 systemd[1]: Created slice kubepods-burstable-podfc4fe4ff_5de6_468b_9f5f_7ee5153f7d19.slice - libcontainer container kubepods-burstable-podfc4fe4ff_5de6_468b_9f5f_7ee5153f7d19.slice.
Jul 2 00:24:36.401628 systemd[1]: Created slice kubepods-besteffort-pod31caa2b5_ff16_4a24_8e21_58c443916572.slice - libcontainer container kubepods-besteffort-pod31caa2b5_ff16_4a24_8e21_58c443916572.slice.
Jul 2 00:24:36.423406 kubelet[3500]: I0702 00:24:36.423218 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e96f5d9-454f-47bc-982e-380f36019f9f-config-volume\") pod \"coredns-76f75df574-4snj8\" (UID: \"6e96f5d9-454f-47bc-982e-380f36019f9f\") " pod="kube-system/coredns-76f75df574-4snj8"
Jul 2 00:24:36.436375 kubelet[3500]: I0702 00:24:36.435865 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwxvf\" (UniqueName: \"kubernetes.io/projected/6e96f5d9-454f-47bc-982e-380f36019f9f-kube-api-access-jwxvf\") pod \"coredns-76f75df574-4snj8\" (UID: \"6e96f5d9-454f-47bc-982e-380f36019f9f\") " pod="kube-system/coredns-76f75df574-4snj8"
Jul 2 00:24:36.536783 kubelet[3500]: I0702 00:24:36.536642 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74xxc\" (UniqueName: \"kubernetes.io/projected/fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19-kube-api-access-74xxc\") pod \"coredns-76f75df574-8dl4w\" (UID: \"fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19\") " pod="kube-system/coredns-76f75df574-8dl4w"
Jul 2 00:24:36.536972 kubelet[3500]: I0702 00:24:36.536851 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19-config-volume\") pod \"coredns-76f75df574-8dl4w\" (UID: \"fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19\") " pod="kube-system/coredns-76f75df574-8dl4w"
Jul 2 00:24:36.536972 kubelet[3500]: I0702 00:24:36.536906 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31caa2b5-ff16-4a24-8e21-58c443916572-tigera-ca-bundle\") pod \"calico-kube-controllers-854b76f5d4-mrn7l\" (UID: \"31caa2b5-ff16-4a24-8e21-58c443916572\") " pod="calico-system/calico-kube-controllers-854b76f5d4-mrn7l"
Jul 2 00:24:36.536972 kubelet[3500]: I0702 00:24:36.536936 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqtkb\" (UniqueName: \"kubernetes.io/projected/31caa2b5-ff16-4a24-8e21-58c443916572-kube-api-access-pqtkb\") pod \"calico-kube-controllers-854b76f5d4-mrn7l\" (UID: \"31caa2b5-ff16-4a24-8e21-58c443916572\") " pod="calico-system/calico-kube-controllers-854b76f5d4-mrn7l"
Jul 2 00:24:36.583918 containerd[1977]: time="2024-07-02T00:24:36.583644610Z" level=info msg="shim disconnected" id=1141721fccb846eb806475d7ba9a08694673eb03cdc386e3b58e1c520a724415 namespace=k8s.io
Jul 2 00:24:36.583918 containerd[1977]: time="2024-07-02T00:24:36.583710065Z" level=warning msg="cleaning up after shim disconnected" id=1141721fccb846eb806475d7ba9a08694673eb03cdc386e3b58e1c520a724415 namespace=k8s.io
Jul 2 00:24:36.583918 containerd[1977]: time="2024-07-02T00:24:36.583727318Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:24:36.717983 containerd[1977]: time="2024-07-02T00:24:36.717932667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854b76f5d4-mrn7l,Uid:31caa2b5-ff16-4a24-8e21-58c443916572,Namespace:calico-system,Attempt:0,}"
Jul 2 00:24:36.796752 systemd[1]: Created slice kubepods-besteffort-pod9f1917ed_818f_4ea3_bce5_d0bf3952f03b.slice - libcontainer container kubepods-besteffort-pod9f1917ed_818f_4ea3_bce5_d0bf3952f03b.slice.
Jul 2 00:24:36.802399 containerd[1977]: time="2024-07-02T00:24:36.802303650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx2v5,Uid:9f1917ed-818f-4ea3-bce5-d0bf3952f03b,Namespace:calico-system,Attempt:0,}"
Jul 2 00:24:37.068083 containerd[1977]: time="2024-07-02T00:24:37.066691776Z" level=error msg="Failed to destroy network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:24:37.075182 containerd[1977]: time="2024-07-02T00:24:37.075119215Z" level=error msg="Failed to destroy network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:24:37.076352 containerd[1977]: time="2024-07-02T00:24:37.076296859Z" level=error msg="encountered an error cleaning up failed sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:24:37.076566 containerd[1977]: time="2024-07-02T00:24:37.076479234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx2v5,Uid:9f1917ed-818f-4ea3-bce5-d0bf3952f03b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:24:37.076623 containerd[1977]: time="2024-07-02T00:24:37.076296415Z" level=error msg="encountered an error cleaning up failed sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:24:37.076758 containerd[1977]: time="2024-07-02T00:24:37.076641594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854b76f5d4-mrn7l,Uid:31caa2b5-ff16-4a24-8e21-58c443916572,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:24:37.077075 kubelet[3500]: E0702 00:24:37.077027 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:24:37.077316 kubelet[3500]: E0702 00:24:37.077155 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx2v5"
Jul 2 00:24:37.077316 kubelet[3500]: E0702 00:24:37.077192 3500 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zx2v5"
Jul 2 00:24:37.077316 kubelet[3500]: E0702 00:24:37.077255 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zx2v5_calico-system(9f1917ed-818f-4ea3-bce5-d0bf3952f03b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zx2v5_calico-system(9f1917ed-818f-4ea3-bce5-d0bf3952f03b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b"
Jul 2 00:24:37.078084 kubelet[3500]: E0702 00:24:37.077506 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:24:37.078084 kubelet[3500]: E0702 00:24:37.077542 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854b76f5d4-mrn7l"
Jul 2 00:24:37.078084 kubelet[3500]: E0702 00:24:37.077568 3500 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854b76f5d4-mrn7l"
Jul 2 00:24:37.078338 kubelet[3500]: E0702 00:24:37.077739 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-854b76f5d4-mrn7l_calico-system(31caa2b5-ff16-4a24-8e21-58c443916572)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-854b76f5d4-mrn7l_calico-system(31caa2b5-ff16-4a24-8e21-58c443916572)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-854b76f5d4-mrn7l" podUID="31caa2b5-ff16-4a24-8e21-58c443916572"
Jul 2 00:24:37.145757 containerd[1977]: time="2024-07-02T00:24:37.144885337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Jul 2 00:24:37.147408 kubelet[3500]: I0702 00:24:37.147382 3500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e"
Jul 2 00:24:37.185094 containerd[1977]: time="2024-07-02T00:24:37.184955105Z" level=info msg="StopPodSandbox for \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\""
Jul 2 00:24:37.191377 kubelet[3500]: I0702 00:24:37.190701 3500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c"
Jul 2 00:24:37.205070 containerd[1977]: time="2024-07-02T00:24:37.203411001Z" level=info msg="StopPodSandbox for \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\""
Jul 2 00:24:37.205070 containerd[1977]: time="2024-07-02T00:24:37.203734426Z" level=info msg="Ensure that sandbox 176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c in task-service has been cleanup successfully"
Jul 2 00:24:37.218080 containerd[1977]: time="2024-07-02T00:24:37.218010470Z" level=info msg="Ensure that sandbox 2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e in task-service has been cleanup successfully"
Jul 2 00:24:37.272144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c-shm.mount: Deactivated successfully.
Jul 2 00:24:37.272274 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e-shm.mount: Deactivated successfully.
Jul 2 00:24:37.315386 containerd[1977]: time="2024-07-02T00:24:37.315331890Z" level=error msg="StopPodSandbox for \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\" failed" error="failed to destroy network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:37.316259 kubelet[3500]: E0702 00:24:37.316228 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:24:37.317463 kubelet[3500]: E0702 00:24:37.317440 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c"} Jul 2 00:24:37.317635 kubelet[3500]: E0702 00:24:37.317622 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f1917ed-818f-4ea3-bce5-d0bf3952f03b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:37.317803 kubelet[3500]: E0702 00:24:37.317791 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"9f1917ed-818f-4ea3-bce5-d0bf3952f03b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zx2v5" podUID="9f1917ed-818f-4ea3-bce5-d0bf3952f03b" Jul 2 00:24:37.319813 containerd[1977]: time="2024-07-02T00:24:37.319663087Z" level=error msg="StopPodSandbox for \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\" failed" error="failed to destroy network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:37.320567 kubelet[3500]: E0702 00:24:37.320313 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:37.320567 kubelet[3500]: E0702 00:24:37.320358 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e"} Jul 2 00:24:37.320567 kubelet[3500]: E0702 00:24:37.320411 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31caa2b5-ff16-4a24-8e21-58c443916572\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:37.320567 kubelet[3500]: E0702 00:24:37.320529 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31caa2b5-ff16-4a24-8e21-58c443916572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-854b76f5d4-mrn7l" podUID="31caa2b5-ff16-4a24-8e21-58c443916572" Jul 2 00:24:37.537062 kubelet[3500]: E0702 00:24:37.536958 3500 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:37.537403 kubelet[3500]: E0702 00:24:37.537210 3500 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e96f5d9-454f-47bc-982e-380f36019f9f-config-volume podName:6e96f5d9-454f-47bc-982e-380f36019f9f nodeName:}" failed. No retries permitted until 2024-07-02 00:24:38.037180548 +0000 UTC m=+39.569795101 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6e96f5d9-454f-47bc-982e-380f36019f9f-config-volume") pod "coredns-76f75df574-4snj8" (UID: "6e96f5d9-454f-47bc-982e-380f36019f9f") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:37.643551 kubelet[3500]: E0702 00:24:37.643312 3500 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:37.643551 kubelet[3500]: E0702 00:24:37.643514 3500 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19-config-volume podName:fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19 nodeName:}" failed. No retries permitted until 2024-07-02 00:24:38.143490087 +0000 UTC m=+39.676104639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19-config-volume") pod "coredns-76f75df574-8dl4w" (UID: "fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:38.166609 containerd[1977]: time="2024-07-02T00:24:38.166553548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4snj8,Uid:6e96f5d9-454f-47bc-982e-380f36019f9f,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:38.201765 containerd[1977]: time="2024-07-02T00:24:38.196342472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8dl4w,Uid:fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:38.392588 containerd[1977]: time="2024-07-02T00:24:38.392531033Z" level=error msg="Failed to destroy network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jul 2 00:24:38.393775 containerd[1977]: time="2024-07-02T00:24:38.393736913Z" level=error msg="encountered an error cleaning up failed sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.393929 containerd[1977]: time="2024-07-02T00:24:38.393902324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4snj8,Uid:6e96f5d9-454f-47bc-982e-380f36019f9f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.394270 kubelet[3500]: E0702 00:24:38.394248 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.395441 kubelet[3500]: E0702 00:24:38.394899 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4snj8" Jul 2 00:24:38.395441 kubelet[3500]: E0702 
00:24:38.394943 3500 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4snj8" Jul 2 00:24:38.395441 kubelet[3500]: E0702 00:24:38.395017 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4snj8_kube-system(6e96f5d9-454f-47bc-982e-380f36019f9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4snj8_kube-system(6e96f5d9-454f-47bc-982e-380f36019f9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4snj8" podUID="6e96f5d9-454f-47bc-982e-380f36019f9f" Jul 2 00:24:38.401265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c-shm.mount: Deactivated successfully. 
Jul 2 00:24:38.426016 containerd[1977]: time="2024-07-02T00:24:38.425883368Z" level=error msg="Failed to destroy network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.429282 containerd[1977]: time="2024-07-02T00:24:38.427392991Z" level=error msg="encountered an error cleaning up failed sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.429282 containerd[1977]: time="2024-07-02T00:24:38.427472964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8dl4w,Uid:fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.434608 kubelet[3500]: E0702 00:24:38.432103 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:38.434608 kubelet[3500]: E0702 00:24:38.432215 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-8dl4w" Jul 2 00:24:38.434608 kubelet[3500]: E0702 00:24:38.432244 3500 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-8dl4w" Jul 2 00:24:38.433911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20-shm.mount: Deactivated successfully. Jul 2 00:24:38.435558 kubelet[3500]: E0702 00:24:38.432649 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-8dl4w_kube-system(fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-8dl4w_kube-system(fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-8dl4w" podUID="fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19" Jul 2 00:24:39.204674 kubelet[3500]: I0702 00:24:39.204640 3500 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:24:39.205776 containerd[1977]: time="2024-07-02T00:24:39.205561134Z" level=info msg="StopPodSandbox for \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\"" Jul 2 00:24:39.207345 kubelet[3500]: I0702 00:24:39.207194 3500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:24:39.208226 containerd[1977]: time="2024-07-02T00:24:39.207746472Z" level=info msg="Ensure that sandbox ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c in task-service has been cleanup successfully" Jul 2 00:24:39.208489 containerd[1977]: time="2024-07-02T00:24:39.208456942Z" level=info msg="StopPodSandbox for \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\"" Jul 2 00:24:39.211198 containerd[1977]: time="2024-07-02T00:24:39.208709540Z" level=info msg="Ensure that sandbox b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20 in task-service has been cleanup successfully" Jul 2 00:24:39.336540 containerd[1977]: time="2024-07-02T00:24:39.336303514Z" level=error msg="StopPodSandbox for \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\" failed" error="failed to destroy network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:39.336694 kubelet[3500]: E0702 00:24:39.336587 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:24:39.336694 kubelet[3500]: E0702 00:24:39.336638 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c"} Jul 2 00:24:39.336694 kubelet[3500]: E0702 00:24:39.336685 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e96f5d9-454f-47bc-982e-380f36019f9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:39.336907 kubelet[3500]: E0702 00:24:39.336723 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e96f5d9-454f-47bc-982e-380f36019f9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4snj8" podUID="6e96f5d9-454f-47bc-982e-380f36019f9f" Jul 2 00:24:39.337514 containerd[1977]: time="2024-07-02T00:24:39.337473320Z" level=error msg="StopPodSandbox for \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\" failed" error="failed to destroy network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 2 00:24:39.337889 kubelet[3500]: E0702 00:24:39.337676 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:24:39.337889 kubelet[3500]: E0702 00:24:39.337705 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20"} Jul 2 00:24:39.337889 kubelet[3500]: E0702 00:24:39.337748 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:39.337889 kubelet[3500]: E0702 00:24:39.337776 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-8dl4w" 
podUID="fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19" Jul 2 00:24:45.158670 systemd[1]: Started sshd@9-172.31.19.56:22-147.75.109.163:40972.service - OpenSSH per-connection server daemon (147.75.109.163:40972). Jul 2 00:24:45.413391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651212014.mount: Deactivated successfully. Jul 2 00:24:45.541645 sshd[4467]: Accepted publickey for core from 147.75.109.163 port 40972 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:24:45.544649 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:24:45.554148 systemd-logind[1957]: New session 10 of user core. Jul 2 00:24:45.564530 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:24:45.591337 containerd[1977]: time="2024-07-02T00:24:45.591280489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:45.596579 containerd[1977]: time="2024-07-02T00:24:45.596527811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:24:45.603954 containerd[1977]: time="2024-07-02T00:24:45.602786123Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:45.616375 containerd[1977]: time="2024-07-02T00:24:45.616325225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:45.617172 containerd[1977]: time="2024-07-02T00:24:45.617126059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.472182481s" Jul 2 00:24:45.617649 containerd[1977]: time="2024-07-02T00:24:45.617177884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:24:45.786015 containerd[1977]: time="2024-07-02T00:24:45.785632592Z" level=info msg="CreateContainer within sandbox \"325f5384ae5089b9ffb8ae7679afb0917802ec26d8ba6fe52e5eb6e6f87210d2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:24:45.915031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount542279911.mount: Deactivated successfully. Jul 2 00:24:45.917455 containerd[1977]: time="2024-07-02T00:24:45.917346409Z" level=info msg="CreateContainer within sandbox \"325f5384ae5089b9ffb8ae7679afb0917802ec26d8ba6fe52e5eb6e6f87210d2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"15e53614f27cc21eaf808215958ad9bfa0d6fa41b4c5afd7f3c7eb2421aa87a9\"" Jul 2 00:24:45.930493 containerd[1977]: time="2024-07-02T00:24:45.930316950Z" level=info msg="StartContainer for \"15e53614f27cc21eaf808215958ad9bfa0d6fa41b4c5afd7f3c7eb2421aa87a9\"" Jul 2 00:24:45.992209 sshd[4467]: pam_unix(sshd:session): session closed for user core Jul 2 00:24:46.000738 systemd[1]: sshd@9-172.31.19.56:22-147.75.109.163:40972.service: Deactivated successfully. Jul 2 00:24:46.006429 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:24:46.009190 systemd-logind[1957]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:24:46.012535 systemd-logind[1957]: Removed session 10. Jul 2 00:24:46.020372 systemd[1]: Started cri-containerd-15e53614f27cc21eaf808215958ad9bfa0d6fa41b4c5afd7f3c7eb2421aa87a9.scope - libcontainer container 15e53614f27cc21eaf808215958ad9bfa0d6fa41b4c5afd7f3c7eb2421aa87a9. 
Jul 2 00:24:46.175212 containerd[1977]: time="2024-07-02T00:24:46.172242683Z" level=info msg="StartContainer for \"15e53614f27cc21eaf808215958ad9bfa0d6fa41b4c5afd7f3c7eb2421aa87a9\" returns successfully" Jul 2 00:24:46.506932 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:24:46.514831 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 00:24:48.807846 systemd-networkd[1872]: vxlan.calico: Link UP Jul 2 00:24:48.807859 systemd-networkd[1872]: vxlan.calico: Gained carrier Jul 2 00:24:48.808836 (udev-worker)[4541]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:24:48.857871 (udev-worker)[4764]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:24:48.858837 (udev-worker)[4766]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:24:49.972478 systemd-networkd[1872]: vxlan.calico: Gained IPv6LL Jul 2 00:24:50.788891 containerd[1977]: time="2024-07-02T00:24:50.788176331Z" level=info msg="StopPodSandbox for \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\"" Jul 2 00:24:50.788891 containerd[1977]: time="2024-07-02T00:24:50.788176642Z" level=info msg="StopPodSandbox for \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\"" Jul 2 00:24:50.946738 kubelet[3500]: I0702 00:24:50.945870 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-w7tbv" podStartSLOduration=6.664191886 podStartE2EDuration="30.915418244s" podCreationTimestamp="2024-07-02 00:24:20 +0000 UTC" firstStartedPulling="2024-07-02 00:24:21.367348632 +0000 UTC m=+22.899963190" lastFinishedPulling="2024-07-02 00:24:45.618574978 +0000 UTC m=+47.151189548" observedRunningTime="2024-07-02 00:24:46.292108924 +0000 UTC m=+47.824723493" watchObservedRunningTime="2024-07-02 00:24:50.915418244 +0000 UTC m=+52.448032836" Jul 2 00:24:51.044318 systemd[1]: Started 
sshd@10-172.31.19.56:22-147.75.109.163:40982.service - OpenSSH per-connection server daemon (147.75.109.163:40982). Jul 2 00:24:51.295074 sshd[4854]: Accepted publickey for core from 147.75.109.163 port 40982 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:24:51.297193 sshd[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:24:51.310699 systemd-logind[1957]: New session 11 of user core. Jul 2 00:24:51.316253 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:50.937 [INFO][4836] k8s.go 608: Cleaning up netns ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:50.937 [INFO][4836] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" iface="eth0" netns="/var/run/netns/cni-57ca50b3-0521-607d-adc3-15248dfb1e5d" Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:50.941 [INFO][4836] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" iface="eth0" netns="/var/run/netns/cni-57ca50b3-0521-607d-adc3-15248dfb1e5d" Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:50.941 [INFO][4836] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" iface="eth0" netns="/var/run/netns/cni-57ca50b3-0521-607d-adc3-15248dfb1e5d" Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:50.941 [INFO][4836] k8s.go 615: Releasing IP address(es) ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:50.941 [INFO][4836] utils.go 188: Calico CNI releasing IP address ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:51.324 [INFO][4846] ipam_plugin.go 411: Releasing address using handleID ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" HandleID="k8s-pod-network.176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:51.326 [INFO][4846] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:51.326 [INFO][4846] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:51.348 [WARNING][4846] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" HandleID="k8s-pod-network.176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:51.348 [INFO][4846] ipam_plugin.go 439: Releasing address using workloadID ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" HandleID="k8s-pod-network.176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:51.350 [INFO][4846] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:51.357977 containerd[1977]: 2024-07-02 00:24:51.355 [INFO][4836] k8s.go 621: Teardown processing complete. ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:24:51.360396 containerd[1977]: time="2024-07-02T00:24:51.360323918Z" level=info msg="TearDown network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\" successfully" Jul 2 00:24:51.360396 containerd[1977]: time="2024-07-02T00:24:51.360395733Z" level=info msg="StopPodSandbox for \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\" returns successfully" Jul 2 00:24:51.366684 systemd[1]: run-netns-cni\x2d57ca50b3\x2d0521\x2d607d\x2dadc3\x2d15248dfb1e5d.mount: Deactivated successfully. Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:50.916 [INFO][4827] k8s.go 608: Cleaning up netns ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:50.919 [INFO][4827] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" iface="eth0" netns="/var/run/netns/cni-6ac6e793-4bfe-1f5a-5d13-f274e1eabc51" Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:50.920 [INFO][4827] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" iface="eth0" netns="/var/run/netns/cni-6ac6e793-4bfe-1f5a-5d13-f274e1eabc51" Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:50.920 [INFO][4827] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" iface="eth0" netns="/var/run/netns/cni-6ac6e793-4bfe-1f5a-5d13-f274e1eabc51" Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:50.920 [INFO][4827] k8s.go 615: Releasing IP address(es) ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:50.920 [INFO][4827] utils.go 188: Calico CNI releasing IP address ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:51.324 [INFO][4844] ipam_plugin.go 411: Releasing address using handleID ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" HandleID="k8s-pod-network.ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:51.326 [INFO][4844] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:51.351 [INFO][4844] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:51.371 [WARNING][4844] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" HandleID="k8s-pod-network.ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:51.371 [INFO][4844] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" HandleID="k8s-pod-network.ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:51.376 [INFO][4844] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:51.385530 containerd[1977]: 2024-07-02 00:24:51.382 [INFO][4827] k8s.go 621: Teardown processing complete. ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:24:51.386554 containerd[1977]: time="2024-07-02T00:24:51.386438386Z" level=info msg="TearDown network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\" successfully" Jul 2 00:24:51.386554 containerd[1977]: time="2024-07-02T00:24:51.386478664Z" level=info msg="StopPodSandbox for \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\" returns successfully" Jul 2 00:24:51.394308 systemd[1]: run-netns-cni\x2d6ac6e793\x2d4bfe\x2d1f5a\x2d5d13\x2df274e1eabc51.mount: Deactivated successfully. 
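An aside on the `run-netns-cni\x2d…​.mount` unit names above: systemd derives mount unit names from the mount path by dropping the leading `/`, turning `/` into `-`, and hex-escaping other unsafe bytes (including literal `-`, 0x2d). A rough sketch of that escaping rule (not the full `systemd-escape` algorithm, which has extra cases for a leading `.` and empty paths):

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd's path-to-unit-name escaping:
    strip slashes at the ends, map '/' to '-', keep [A-Za-z0-9:_.],
    and hex-escape everything else as \\xNN (lowercase hex)."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# The CNI netns path from the log entries above:
unit = systemd_escape_path(
    "/run/netns/cni-6ac6e793-4bfe-1f5a-5d13-f274e1eabc51") + ".mount"
# -> run-netns-cni\x2d6ac6e793\x2d4bfe\x2d1f5a\x2d5d13\x2df274e1eabc51.mount
```

This reproduces the unit name systemd reports when it deactivates the netns bind mount after sandbox teardown.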
Jul 2 00:24:51.394592 containerd[1977]: time="2024-07-02T00:24:51.394360595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx2v5,Uid:9f1917ed-818f-4ea3-bce5-d0bf3952f03b,Namespace:calico-system,Attempt:1,}" Jul 2 00:24:51.395000 containerd[1977]: time="2024-07-02T00:24:51.394958469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4snj8,Uid:6e96f5d9-454f-47bc-982e-380f36019f9f,Namespace:kube-system,Attempt:1,}" Jul 2 00:24:51.787736 containerd[1977]: time="2024-07-02T00:24:51.787668067Z" level=info msg="StopPodSandbox for \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\"" Jul 2 00:24:51.794286 sshd[4854]: pam_unix(sshd:session): session closed for user core Jul 2 00:24:51.802106 systemd[1]: sshd@10-172.31.19.56:22-147.75.109.163:40982.service: Deactivated successfully. Jul 2 00:24:51.806968 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:24:51.817993 systemd-logind[1957]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:24:51.829252 systemd-logind[1957]: Removed session 11. Jul 2 00:24:52.156289 systemd-networkd[1872]: cali29a39a4d6fb: Link UP Jul 2 00:24:52.160128 (udev-worker)[4948]: Network interface NamePolicy= disabled on kernel command line. 
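The `WARNING … Asked to release address but it doesn't exist. Ignoring` lines above are Calico honoring the CNI requirement that DEL be idempotent: releasing an address that was never assigned (or was already released) is logged and ignored rather than treated as an error. A toy model of that behavior, using a hypothetical in-memory handle store rather than Calico's real datastore:

```python
# handle ID -> assigned IP; a stand-in for Calico's IPAM handle records
handles: dict = {}

def release(handle_id: str) -> bool:
    """Idempotent release: returns True if an address was actually freed,
    False when there was nothing to release (the 'Ignoring' case)."""
    ip = handles.pop(handle_id, None)
    if ip is None:
        # "Asked to release address but it doesn't exist. Ignoring"
        return False
    return True
```

A second DEL for the same sandbox (common when kubelet retries teardown) then becomes a harmless no-op instead of a failure that would wedge pod deletion.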
Jul 2 00:24:52.162375 systemd-networkd[1872]: cali29a39a4d6fb: Gained carrier Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:51.776 [INFO][4884] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0 csi-node-driver- calico-system 9f1917ed-818f-4ea3-bce5-d0bf3952f03b 744 0 2024-07-02 00:24:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-19-56 csi-node-driver-zx2v5 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali29a39a4d6fb [] []}} ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Namespace="calico-system" Pod="csi-node-driver-zx2v5" WorkloadEndpoint="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:51.776 [INFO][4884] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Namespace="calico-system" Pod="csi-node-driver-zx2v5" WorkloadEndpoint="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:51.966 [INFO][4927] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" HandleID="k8s-pod-network.9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.001 [INFO][4927] ipam_plugin.go 264: Auto assigning IP ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" 
HandleID="k8s-pod-network.9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035c940), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-56", "pod":"csi-node-driver-zx2v5", "timestamp":"2024-07-02 00:24:51.964833634 +0000 UTC"}, Hostname:"ip-172-31-19-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.001 [INFO][4927] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.001 [INFO][4927] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.001 [INFO][4927] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-56' Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.009 [INFO][4927] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" host="ip-172-31-19-56" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.030 [INFO][4927] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-56" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.042 [INFO][4927] ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.049 [INFO][4927] ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.056 [INFO][4927] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.056 
[INFO][4927] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" host="ip-172-31-19-56" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.064 [INFO][4927] ipam.go 1685: Creating new handle: k8s-pod-network.9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.101 [INFO][4927] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" host="ip-172-31-19-56" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.122 [INFO][4927] ipam.go 1216: Successfully claimed IPs: [192.168.72.129/26] block=192.168.72.128/26 handle="k8s-pod-network.9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" host="ip-172-31-19-56" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.123 [INFO][4927] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.129/26] handle="k8s-pod-network.9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" host="ip-172-31-19-56" Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.126 [INFO][4927] ipam_plugin.go 373: Released host-wide IPAM lock. 
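The IPAM sequence above claims 192.168.72.129, and the parallel coredns flow later claims .130, both from the host-affine block 192.168.72.128/26. A minimal sketch of sequential assignment from such a block using the standard `ipaddress` module; note this skips the network and broadcast addresses via `hosts()`, which happens to match the first claims seen here, but Calico's real allocator is considerably more involved (handles, block affinity, reservations):

```python
import ipaddress

def next_free(block: str, allocated: set) -> str:
    """Return the lowest unclaimed host address in the block."""
    net = ipaddress.ip_network(block)
    for host in net.hosts():  # /26: yields .129 through .190
        s = str(host)
        if s not in allocated:
            return s
    raise RuntimeError("block exhausted")

first = next_free("192.168.72.128/26", set())      # csi-node-driver pod
second = next_free("192.168.72.128/26", {first})   # coredns pod
```

The "About to acquire / Acquired / Released host-wide IPAM lock" lines show why the two concurrent CNI ADDs still get distinct addresses: all claims on the node are serialized under one lock.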
Jul 2 00:24:52.238696 containerd[1977]: 2024-07-02 00:24:52.126 [INFO][4927] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.72.129/26] IPv6=[] ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" HandleID="k8s-pod-network.9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:52.240031 containerd[1977]: 2024-07-02 00:24:52.133 [INFO][4884] k8s.go 386: Populated endpoint ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Namespace="calico-system" Pod="csi-node-driver-zx2v5" WorkloadEndpoint="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f1917ed-818f-4ea3-bce5-d0bf3952f03b", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"", Pod:"csi-node-driver-zx2v5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali29a39a4d6fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.240031 containerd[1977]: 2024-07-02 00:24:52.134 [INFO][4884] k8s.go 387: Calico CNI using IPs: [192.168.72.129/32] ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Namespace="calico-system" Pod="csi-node-driver-zx2v5" WorkloadEndpoint="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:52.240031 containerd[1977]: 2024-07-02 00:24:52.134 [INFO][4884] dataplane_linux.go 68: Setting the host side veth name to cali29a39a4d6fb ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Namespace="calico-system" Pod="csi-node-driver-zx2v5" WorkloadEndpoint="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:52.240031 containerd[1977]: 2024-07-02 00:24:52.167 [INFO][4884] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Namespace="calico-system" Pod="csi-node-driver-zx2v5" WorkloadEndpoint="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:52.240031 containerd[1977]: 2024-07-02 00:24:52.169 [INFO][4884] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Namespace="calico-system" Pod="csi-node-driver-zx2v5" WorkloadEndpoint="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f1917ed-818f-4ea3-bce5-d0bf3952f03b", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 20, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd", Pod:"csi-node-driver-zx2v5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali29a39a4d6fb", MAC:"7a:02:fe:4b:03:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.240031 containerd[1977]: 2024-07-02 00:24:52.225 [INFO][4884] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd" Namespace="calico-system" Pod="csi-node-driver-zx2v5" WorkloadEndpoint="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:24:52.331952 systemd-networkd[1872]: calid7de5357b04: Link UP Jul 2 00:24:52.334613 systemd-networkd[1872]: calid7de5357b04: Gained carrier Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.058 [INFO][4922] k8s.go 608: Cleaning up netns ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.059 [INFO][4922] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" iface="eth0" netns="/var/run/netns/cni-a10769de-b85c-345f-d112-e2da78306762" Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.061 [INFO][4922] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" iface="eth0" netns="/var/run/netns/cni-a10769de-b85c-345f-d112-e2da78306762" Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.061 [INFO][4922] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" iface="eth0" netns="/var/run/netns/cni-a10769de-b85c-345f-d112-e2da78306762" Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.061 [INFO][4922] k8s.go 615: Releasing IP address(es) ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.061 [INFO][4922] utils.go 188: Calico CNI releasing IP address ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.142 [INFO][4941] ipam_plugin.go 411: Releasing address using handleID ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" HandleID="k8s-pod-network.2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.142 [INFO][4941] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.296 [INFO][4941] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.349 [WARNING][4941] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" HandleID="k8s-pod-network.2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.349 [INFO][4941] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" HandleID="k8s-pod-network.2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.358 [INFO][4941] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.383660 containerd[1977]: 2024-07-02 00:24:52.371 [INFO][4922] k8s.go 621: Teardown processing complete. ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:52.386225 containerd[1977]: time="2024-07-02T00:24:52.386161844Z" level=info msg="TearDown network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\" successfully" Jul 2 00:24:52.386225 containerd[1977]: time="2024-07-02T00:24:52.386199221Z" level=info msg="StopPodSandbox for \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\" returns successfully" Jul 2 00:24:52.389460 containerd[1977]: time="2024-07-02T00:24:52.389420060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854b76f5d4-mrn7l,Uid:31caa2b5-ff16-4a24-8e21-58c443916572,Namespace:calico-system,Attempt:1,}" Jul 2 00:24:52.393329 containerd[1977]: time="2024-07-02T00:24:52.391610263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:52.393329 containerd[1977]: time="2024-07-02T00:24:52.391840135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.393329 containerd[1977]: time="2024-07-02T00:24:52.391892385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:52.393329 containerd[1977]: time="2024-07-02T00:24:52.391915052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.395664 systemd[1]: run-netns-cni\x2da10769de\x2db85c\x2d345f\x2dd112\x2de2da78306762.mount: Deactivated successfully. Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:51.773 [INFO][4889] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0 coredns-76f75df574- kube-system 6e96f5d9-454f-47bc-982e-380f36019f9f 743 0 2024-07-02 00:24:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-56 coredns-76f75df574-4snj8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7de5357b04 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Namespace="kube-system" Pod="coredns-76f75df574-4snj8" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:51.773 [INFO][4889] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Namespace="kube-system" Pod="coredns-76f75df574-4snj8" 
WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:51.995 [INFO][4923] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" HandleID="k8s-pod-network.0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.023 [INFO][4923] ipam_plugin.go 264: Auto assigning IP ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" HandleID="k8s-pod-network.0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004bc320), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-56", "pod":"coredns-76f75df574-4snj8", "timestamp":"2024-07-02 00:24:51.995464008 +0000 UTC"}, Hostname:"ip-172-31-19-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.023 [INFO][4923] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.126 [INFO][4923] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.126 [INFO][4923] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-56' Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.135 [INFO][4923] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" host="ip-172-31-19-56" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.164 [INFO][4923] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-56" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.192 [INFO][4923] ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.202 [INFO][4923] ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.230 [INFO][4923] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.230 [INFO][4923] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" host="ip-172-31-19-56" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.254 [INFO][4923] ipam.go 1685: Creating new handle: k8s-pod-network.0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626 Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.270 [INFO][4923] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" host="ip-172-31-19-56" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.295 [INFO][4923] ipam.go 1216: Successfully claimed IPs: [192.168.72.130/26] block=192.168.72.128/26 
handle="k8s-pod-network.0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" host="ip-172-31-19-56" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.296 [INFO][4923] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.130/26] handle="k8s-pod-network.0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" host="ip-172-31-19-56" Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.296 [INFO][4923] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.432925 containerd[1977]: 2024-07-02 00:24:52.296 [INFO][4923] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.72.130/26] IPv6=[] ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" HandleID="k8s-pod-network.0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:24:52.434151 containerd[1977]: 2024-07-02 00:24:52.318 [INFO][4889] k8s.go 386: Populated endpoint ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Namespace="kube-system" Pod="coredns-76f75df574-4snj8" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6e96f5d9-454f-47bc-982e-380f36019f9f", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"", Pod:"coredns-76f75df574-4snj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7de5357b04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.434151 containerd[1977]: 2024-07-02 00:24:52.318 [INFO][4889] k8s.go 387: Calico CNI using IPs: [192.168.72.130/32] ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Namespace="kube-system" Pod="coredns-76f75df574-4snj8" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:24:52.434151 containerd[1977]: 2024-07-02 00:24:52.318 [INFO][4889] dataplane_linux.go 68: Setting the host side veth name to calid7de5357b04 ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Namespace="kube-system" Pod="coredns-76f75df574-4snj8" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:24:52.434151 containerd[1977]: 2024-07-02 00:24:52.341 [INFO][4889] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Namespace="kube-system" Pod="coredns-76f75df574-4snj8" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 
00:24:52.434151 containerd[1977]: 2024-07-02 00:24:52.346 [INFO][4889] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Namespace="kube-system" Pod="coredns-76f75df574-4snj8" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6e96f5d9-454f-47bc-982e-380f36019f9f", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626", Pod:"coredns-76f75df574-4snj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7de5357b04", MAC:"de:fd:44:54:7d:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.434151 containerd[1977]: 2024-07-02 00:24:52.413 [INFO][4889] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626" Namespace="kube-system" Pod="coredns-76f75df574-4snj8" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:24:52.496797 systemd[1]: Started cri-containerd-9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd.scope - libcontainer container 9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd. Jul 2 00:24:52.620579 containerd[1977]: time="2024-07-02T00:24:52.620246503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:52.620579 containerd[1977]: time="2024-07-02T00:24:52.620322009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.620579 containerd[1977]: time="2024-07-02T00:24:52.620344855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:52.620579 containerd[1977]: time="2024-07-02T00:24:52.620359296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:52.675397 containerd[1977]: time="2024-07-02T00:24:52.675352002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zx2v5,Uid:9f1917ed-818f-4ea3-bce5-d0bf3952f03b,Namespace:calico-system,Attempt:1,} returns sandbox id \"9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd\"" Jul 2 00:24:52.699983 systemd[1]: Started cri-containerd-0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626.scope - libcontainer container 0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626. Jul 2 00:24:52.768258 containerd[1977]: time="2024-07-02T00:24:52.765466724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:24:52.867129 containerd[1977]: time="2024-07-02T00:24:52.867085451Z" level=info msg="StopPodSandbox for \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\"" Jul 2 00:24:53.024155 containerd[1977]: time="2024-07-02T00:24:53.024110833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4snj8,Uid:6e96f5d9-454f-47bc-982e-380f36019f9f,Namespace:kube-system,Attempt:1,} returns sandbox id \"0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626\"" Jul 2 00:24:53.039742 containerd[1977]: time="2024-07-02T00:24:53.039683555Z" level=info msg="CreateContainer within sandbox \"0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:24:53.075845 systemd-networkd[1872]: calie497578cdae: Link UP Jul 2 00:24:53.076512 systemd-networkd[1872]: calie497578cdae: Gained carrier Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.590 [INFO][4988] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0 calico-kube-controllers-854b76f5d4- calico-system 31caa2b5-ff16-4a24-8e21-58c443916572 752 0 
2024-07-02 00:24:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:854b76f5d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-56 calico-kube-controllers-854b76f5d4-mrn7l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie497578cdae [] []}} ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Namespace="calico-system" Pod="calico-kube-controllers-854b76f5d4-mrn7l" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.592 [INFO][4988] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Namespace="calico-system" Pod="calico-kube-controllers-854b76f5d4-mrn7l" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.752 [INFO][5044] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" HandleID="k8s-pod-network.574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.817 [INFO][5044] ipam_plugin.go 264: Auto assigning IP ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" HandleID="k8s-pod-network.574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000460690), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ip-172-31-19-56", "pod":"calico-kube-controllers-854b76f5d4-mrn7l", "timestamp":"2024-07-02 00:24:52.752205213 +0000 UTC"}, Hostname:"ip-172-31-19-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.818 [INFO][5044] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.818 [INFO][5044] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.818 [INFO][5044] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-56' Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.825 [INFO][5044] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" host="ip-172-31-19-56" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.872 [INFO][5044] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-56" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.919 [INFO][5044] ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.934 [INFO][5044] ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.950 [INFO][5044] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.952 [INFO][5044] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" host="ip-172-31-19-56" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.959 
[INFO][5044] ipam.go 1685: Creating new handle: k8s-pod-network.574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16 Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:52.987 [INFO][5044] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" host="ip-172-31-19-56" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:53.018 [INFO][5044] ipam.go 1216: Successfully claimed IPs: [192.168.72.131/26] block=192.168.72.128/26 handle="k8s-pod-network.574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" host="ip-172-31-19-56" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:53.019 [INFO][5044] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.131/26] handle="k8s-pod-network.574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" host="ip-172-31-19-56" Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:53.019 [INFO][5044] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:24:53.129769 containerd[1977]: 2024-07-02 00:24:53.020 [INFO][5044] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.72.131/26] IPv6=[] ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" HandleID="k8s-pod-network.574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:53.134091 containerd[1977]: 2024-07-02 00:24:53.040 [INFO][4988] k8s.go 386: Populated endpoint ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Namespace="calico-system" Pod="calico-kube-controllers-854b76f5d4-mrn7l" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0", GenerateName:"calico-kube-controllers-854b76f5d4-", Namespace:"calico-system", SelfLink:"", UID:"31caa2b5-ff16-4a24-8e21-58c443916572", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854b76f5d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"", Pod:"calico-kube-controllers-854b76f5d4-mrn7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie497578cdae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:53.134091 containerd[1977]: 2024-07-02 00:24:53.040 [INFO][4988] k8s.go 387: Calico CNI using IPs: [192.168.72.131/32] ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Namespace="calico-system" Pod="calico-kube-controllers-854b76f5d4-mrn7l" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:53.134091 containerd[1977]: 2024-07-02 00:24:53.040 [INFO][4988] dataplane_linux.go 68: Setting the host side veth name to calie497578cdae ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Namespace="calico-system" Pod="calico-kube-controllers-854b76f5d4-mrn7l" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:53.134091 containerd[1977]: 2024-07-02 00:24:53.079 [INFO][4988] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Namespace="calico-system" Pod="calico-kube-controllers-854b76f5d4-mrn7l" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:53.134091 containerd[1977]: 2024-07-02 00:24:53.090 [INFO][4988] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Namespace="calico-system" Pod="calico-kube-controllers-854b76f5d4-mrn7l" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0", GenerateName:"calico-kube-controllers-854b76f5d4-", Namespace:"calico-system", SelfLink:"", UID:"31caa2b5-ff16-4a24-8e21-58c443916572", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854b76f5d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16", Pod:"calico-kube-controllers-854b76f5d4-mrn7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie497578cdae", MAC:"56:36:98:6d:6a:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:53.134091 containerd[1977]: 2024-07-02 00:24:53.120 [INFO][4988] k8s.go 500: Wrote updated endpoint to datastore ContainerID="574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16" Namespace="calico-system" Pod="calico-kube-controllers-854b76f5d4-mrn7l" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:53.168797 containerd[1977]: time="2024-07-02T00:24:53.166265642Z" level=info msg="CreateContainer within sandbox 
\"0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e87243e66314aba7533e4a6faecfdec3f50cfa7237d624aff453c8d0861e30b\"" Jul 2 00:24:53.168797 containerd[1977]: time="2024-07-02T00:24:53.168734727Z" level=info msg="StartContainer for \"8e87243e66314aba7533e4a6faecfdec3f50cfa7237d624aff453c8d0861e30b\"" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.183 [INFO][5083] k8s.go 608: Cleaning up netns ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.185 [INFO][5083] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" iface="eth0" netns="/var/run/netns/cni-015ac032-c8c5-d9ae-b8e7-9a81d9bbf2c5" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.190 [INFO][5083] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" iface="eth0" netns="/var/run/netns/cni-015ac032-c8c5-d9ae-b8e7-9a81d9bbf2c5" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.190 [INFO][5083] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" iface="eth0" netns="/var/run/netns/cni-015ac032-c8c5-d9ae-b8e7-9a81d9bbf2c5" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.190 [INFO][5083] k8s.go 615: Releasing IP address(es) ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.191 [INFO][5083] utils.go 188: Calico CNI releasing IP address ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.322 [INFO][5110] ipam_plugin.go 411: Releasing address using handleID ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" HandleID="k8s-pod-network.b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.326 [INFO][5110] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.326 [INFO][5110] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.348 [WARNING][5110] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" HandleID="k8s-pod-network.b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.348 [INFO][5110] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" HandleID="k8s-pod-network.b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.358 [INFO][5110] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:53.453900 containerd[1977]: 2024-07-02 00:24:53.372 [INFO][5083] k8s.go 621: Teardown processing complete. ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:24:53.456082 containerd[1977]: time="2024-07-02T00:24:53.454937173Z" level=info msg="TearDown network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\" successfully" Jul 2 00:24:53.456082 containerd[1977]: time="2024-07-02T00:24:53.454980317Z" level=info msg="StopPodSandbox for \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\" returns successfully" Jul 2 00:24:53.473276 systemd[1]: run-netns-cni\x2d015ac032\x2dc8c5\x2dd9ae\x2db8e7\x2d9a81d9bbf2c5.mount: Deactivated successfully. Jul 2 00:24:53.478940 containerd[1977]: time="2024-07-02T00:24:53.478892890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8dl4w,Uid:fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19,Namespace:kube-system,Attempt:1,}" Jul 2 00:24:53.482302 systemd[1]: Started cri-containerd-8e87243e66314aba7533e4a6faecfdec3f50cfa7237d624aff453c8d0861e30b.scope - libcontainer container 8e87243e66314aba7533e4a6faecfdec3f50cfa7237d624aff453c8d0861e30b. 
Jul 2 00:24:53.492366 systemd-networkd[1872]: cali29a39a4d6fb: Gained IPv6LL Jul 2 00:24:53.528977 containerd[1977]: time="2024-07-02T00:24:53.526712844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:53.528977 containerd[1977]: time="2024-07-02T00:24:53.526811391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:53.528977 containerd[1977]: time="2024-07-02T00:24:53.526846957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:53.528977 containerd[1977]: time="2024-07-02T00:24:53.526869489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:53.663435 systemd[1]: Started cri-containerd-574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16.scope - libcontainer container 574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16. 
Jul 2 00:24:53.735121 containerd[1977]: time="2024-07-02T00:24:53.734738212Z" level=info msg="StartContainer for \"8e87243e66314aba7533e4a6faecfdec3f50cfa7237d624aff453c8d0861e30b\" returns successfully" Jul 2 00:24:53.813514 systemd-networkd[1872]: calid7de5357b04: Gained IPv6LL Jul 2 00:24:53.994279 containerd[1977]: time="2024-07-02T00:24:53.992269233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854b76f5d4-mrn7l,Uid:31caa2b5-ff16-4a24-8e21-58c443916572,Namespace:calico-system,Attempt:1,} returns sandbox id \"574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16\"" Jul 2 00:24:54.065873 systemd-networkd[1872]: calida51d9ec864: Link UP Jul 2 00:24:54.066382 systemd-networkd[1872]: calida51d9ec864: Gained carrier Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:53.741 [INFO][5160] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0 coredns-76f75df574- kube-system fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19 770 0 2024-07-02 00:24:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-56 coredns-76f75df574-8dl4w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida51d9ec864 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Namespace="kube-system" Pod="coredns-76f75df574-8dl4w" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:53.744 [INFO][5160] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Namespace="kube-system" Pod="coredns-76f75df574-8dl4w" 
WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:53.905 [INFO][5193] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" HandleID="k8s-pod-network.508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:53.946 [INFO][5193] ipam_plugin.go 264: Auto assigning IP ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" HandleID="k8s-pod-network.508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051430), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-56", "pod":"coredns-76f75df574-8dl4w", "timestamp":"2024-07-02 00:24:53.905443898 +0000 UTC"}, Hostname:"ip-172-31-19-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:53.947 [INFO][5193] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:53.948 [INFO][5193] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:53.948 [INFO][5193] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-56' Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:53.960 [INFO][5193] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" host="ip-172-31-19-56" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:53.976 [INFO][5193] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-56" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.010 [INFO][5193] ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.014 [INFO][5193] ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.021 [INFO][5193] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-19-56" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.021 [INFO][5193] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" host="ip-172-31-19-56" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.027 [INFO][5193] ipam.go 1685: Creating new handle: k8s-pod-network.508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.035 [INFO][5193] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" host="ip-172-31-19-56" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.052 [INFO][5193] ipam.go 1216: Successfully claimed IPs: [192.168.72.132/26] block=192.168.72.128/26 
handle="k8s-pod-network.508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" host="ip-172-31-19-56" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.052 [INFO][5193] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.132/26] handle="k8s-pod-network.508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" host="ip-172-31-19-56" Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.052 [INFO][5193] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:54.106177 containerd[1977]: 2024-07-02 00:24:54.052 [INFO][5193] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.72.132/26] IPv6=[] ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" HandleID="k8s-pod-network.508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:24:54.108951 containerd[1977]: 2024-07-02 00:24:54.055 [INFO][5160] k8s.go 386: Populated endpoint ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Namespace="kube-system" Pod="coredns-76f75df574-8dl4w" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"", Pod:"coredns-76f75df574-8dl4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida51d9ec864", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:54.108951 containerd[1977]: 2024-07-02 00:24:54.055 [INFO][5160] k8s.go 387: Calico CNI using IPs: [192.168.72.132/32] ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Namespace="kube-system" Pod="coredns-76f75df574-8dl4w" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:24:54.108951 containerd[1977]: 2024-07-02 00:24:54.055 [INFO][5160] dataplane_linux.go 68: Setting the host side veth name to calida51d9ec864 ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Namespace="kube-system" Pod="coredns-76f75df574-8dl4w" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:24:54.108951 containerd[1977]: 2024-07-02 00:24:54.065 [INFO][5160] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Namespace="kube-system" Pod="coredns-76f75df574-8dl4w" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 
00:24:54.108951 containerd[1977]: 2024-07-02 00:24:54.068 [INFO][5160] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Namespace="kube-system" Pod="coredns-76f75df574-8dl4w" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a", Pod:"coredns-76f75df574-8dl4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida51d9ec864", MAC:"2e:67:4d:54:06:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:54.108951 containerd[1977]: 2024-07-02 00:24:54.099 [INFO][5160] k8s.go 500: Wrote updated endpoint to datastore ContainerID="508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a" Namespace="kube-system" Pod="coredns-76f75df574-8dl4w" WorkloadEndpoint="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:24:54.171412 containerd[1977]: time="2024-07-02T00:24:54.169633896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:54.171412 containerd[1977]: time="2024-07-02T00:24:54.169722113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:54.171412 containerd[1977]: time="2024-07-02T00:24:54.169770460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:54.171412 containerd[1977]: time="2024-07-02T00:24:54.169790369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:54.253426 systemd[1]: Started cri-containerd-508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a.scope - libcontainer container 508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a. 
Jul 2 00:24:54.328139 systemd-networkd[1872]: calie497578cdae: Gained IPv6LL Jul 2 00:24:54.441302 containerd[1977]: time="2024-07-02T00:24:54.441196520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8dl4w,Uid:fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19,Namespace:kube-system,Attempt:1,} returns sandbox id \"508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a\"" Jul 2 00:24:54.458267 containerd[1977]: time="2024-07-02T00:24:54.458225291Z" level=info msg="CreateContainer within sandbox \"508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:24:54.522116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188214739.mount: Deactivated successfully. Jul 2 00:24:54.526867 containerd[1977]: time="2024-07-02T00:24:54.526679780Z" level=info msg="CreateContainer within sandbox \"508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e365a0f2344754ced02bda1b3406da86687eebe792336a9cbf3a4dc96fecccb\"" Jul 2 00:24:54.529156 containerd[1977]: time="2024-07-02T00:24:54.529112080Z" level=info msg="StartContainer for \"7e365a0f2344754ced02bda1b3406da86687eebe792336a9cbf3a4dc96fecccb\"" Jul 2 00:24:54.628167 kubelet[3500]: I0702 00:24:54.626470 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4snj8" podStartSLOduration=42.626416438 podStartE2EDuration="42.626416438s" podCreationTimestamp="2024-07-02 00:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:54.626091956 +0000 UTC m=+56.158706623" watchObservedRunningTime="2024-07-02 00:24:54.626416438 +0000 UTC m=+56.159031011" Jul 2 00:24:54.671853 systemd[1]: Started cri-containerd-7e365a0f2344754ced02bda1b3406da86687eebe792336a9cbf3a4dc96fecccb.scope - 
libcontainer container 7e365a0f2344754ced02bda1b3406da86687eebe792336a9cbf3a4dc96fecccb. Jul 2 00:24:54.762203 containerd[1977]: time="2024-07-02T00:24:54.761602624Z" level=info msg="StartContainer for \"7e365a0f2344754ced02bda1b3406da86687eebe792336a9cbf3a4dc96fecccb\" returns successfully" Jul 2 00:24:55.231075 containerd[1977]: time="2024-07-02T00:24:55.230800068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:55.233890 containerd[1977]: time="2024-07-02T00:24:55.233806853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:24:55.237729 containerd[1977]: time="2024-07-02T00:24:55.237528778Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:55.260974 containerd[1977]: time="2024-07-02T00:24:55.260854916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:55.262076 containerd[1977]: time="2024-07-02T00:24:55.261789446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.496268954s" Jul 2 00:24:55.262076 containerd[1977]: time="2024-07-02T00:24:55.261837275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:24:55.264223 containerd[1977]: time="2024-07-02T00:24:55.264149508Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:24:55.266685 containerd[1977]: time="2024-07-02T00:24:55.266513374Z" level=info msg="CreateContainer within sandbox \"9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:24:55.411808 systemd[1]: run-containerd-runc-k8s.io-7e365a0f2344754ced02bda1b3406da86687eebe792336a9cbf3a4dc96fecccb-runc.0RY2kD.mount: Deactivated successfully. Jul 2 00:24:55.517077 containerd[1977]: time="2024-07-02T00:24:55.514381531Z" level=info msg="CreateContainer within sandbox \"9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dce5087328dfeca5797101e623aa1bd6ec3c8dda32420e32fdabf097976a1563\"" Jul 2 00:24:55.524155 containerd[1977]: time="2024-07-02T00:24:55.523879033Z" level=info msg="StartContainer for \"dce5087328dfeca5797101e623aa1bd6ec3c8dda32420e32fdabf097976a1563\"" Jul 2 00:24:55.745371 systemd[1]: run-containerd-runc-k8s.io-dce5087328dfeca5797101e623aa1bd6ec3c8dda32420e32fdabf097976a1563-runc.285YwE.mount: Deactivated successfully. Jul 2 00:24:55.767696 systemd[1]: Started cri-containerd-dce5087328dfeca5797101e623aa1bd6ec3c8dda32420e32fdabf097976a1563.scope - libcontainer container dce5087328dfeca5797101e623aa1bd6ec3c8dda32420e32fdabf097976a1563. 
Jul 2 00:24:55.860533 kubelet[3500]: I0702 00:24:55.856677 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8dl4w" podStartSLOduration=43.856358017 podStartE2EDuration="43.856358017s" podCreationTimestamp="2024-07-02 00:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:55.65751908 +0000 UTC m=+57.190133650" watchObservedRunningTime="2024-07-02 00:24:55.856358017 +0000 UTC m=+57.388972590" Jul 2 00:24:56.052270 systemd-networkd[1872]: calida51d9ec864: Gained IPv6LL Jul 2 00:24:56.057532 containerd[1977]: time="2024-07-02T00:24:56.057492904Z" level=info msg="StartContainer for \"dce5087328dfeca5797101e623aa1bd6ec3c8dda32420e32fdabf097976a1563\" returns successfully" Jul 2 00:24:56.849460 systemd[1]: Started sshd@11-172.31.19.56:22-147.75.109.163:45632.service - OpenSSH per-connection server daemon (147.75.109.163:45632). Jul 2 00:24:57.203302 sshd[5359]: Accepted publickey for core from 147.75.109.163 port 45632 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:24:57.211982 sshd[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:24:57.243274 systemd-logind[1957]: New session 12 of user core. Jul 2 00:24:57.251341 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:24:57.979441 sshd[5359]: pam_unix(sshd:session): session closed for user core Jul 2 00:24:57.987199 systemd-logind[1957]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:24:57.990771 systemd[1]: sshd@11-172.31.19.56:22-147.75.109.163:45632.service: Deactivated successfully. Jul 2 00:24:57.998387 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:24:58.005884 systemd-logind[1957]: Removed session 12. 
Jul 2 00:24:58.891752 containerd[1977]: time="2024-07-02T00:24:58.891513327Z" level=info msg="StopPodSandbox for \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\"" Jul 2 00:24:58.953160 ntpd[1951]: Listen normally on 8 vxlan.calico 192.168.72.128:123 Jul 2 00:24:58.956146 ntpd[1951]: 2 Jul 00:24:58 ntpd[1951]: Listen normally on 8 vxlan.calico 192.168.72.128:123 Jul 2 00:24:58.956146 ntpd[1951]: 2 Jul 00:24:58 ntpd[1951]: Listen normally on 9 vxlan.calico [fe80::649a:cbff:fed2:67a4%4]:123 Jul 2 00:24:58.956146 ntpd[1951]: 2 Jul 00:24:58 ntpd[1951]: Listen normally on 10 cali29a39a4d6fb [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:24:58.956146 ntpd[1951]: 2 Jul 00:24:58 ntpd[1951]: Listen normally on 11 calid7de5357b04 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:24:58.956146 ntpd[1951]: 2 Jul 00:24:58 ntpd[1951]: Listen normally on 12 calie497578cdae [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:24:58.956146 ntpd[1951]: 2 Jul 00:24:58 ntpd[1951]: Listen normally on 13 calida51d9ec864 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:24:58.953294 ntpd[1951]: Listen normally on 9 vxlan.calico [fe80::649a:cbff:fed2:67a4%4]:123 Jul 2 00:24:58.953379 ntpd[1951]: Listen normally on 10 cali29a39a4d6fb [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:24:58.953425 ntpd[1951]: Listen normally on 11 calid7de5357b04 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:24:58.953486 ntpd[1951]: Listen normally on 12 calie497578cdae [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:24:58.953741 ntpd[1951]: Listen normally on 13 calida51d9ec864 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.208 [WARNING][5417] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0", GenerateName:"calico-kube-controllers-854b76f5d4-", Namespace:"calico-system", SelfLink:"", UID:"31caa2b5-ff16-4a24-8e21-58c443916572", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854b76f5d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16", Pod:"calico-kube-controllers-854b76f5d4-mrn7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie497578cdae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.208 [INFO][5417] k8s.go 608: Cleaning up netns ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.208 [INFO][5417] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" iface="eth0" netns="" Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.209 [INFO][5417] k8s.go 615: Releasing IP address(es) ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.209 [INFO][5417] utils.go 188: Calico CNI releasing IP address ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.435 [INFO][5426] ipam_plugin.go 411: Releasing address using handleID ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" HandleID="k8s-pod-network.2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.438 [INFO][5426] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.439 [INFO][5426] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.493 [WARNING][5426] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" HandleID="k8s-pod-network.2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.493 [INFO][5426] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" HandleID="k8s-pod-network.2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.509 [INFO][5426] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:59.540963 containerd[1977]: 2024-07-02 00:24:59.521 [INFO][5417] k8s.go 621: Teardown processing complete. ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:59.540963 containerd[1977]: time="2024-07-02T00:24:59.540217827Z" level=info msg="TearDown network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\" successfully" Jul 2 00:24:59.540963 containerd[1977]: time="2024-07-02T00:24:59.540246984Z" level=info msg="StopPodSandbox for \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\" returns successfully" Jul 2 00:24:59.588258 containerd[1977]: time="2024-07-02T00:24:59.587392248Z" level=info msg="RemovePodSandbox for \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\"" Jul 2 00:24:59.588258 containerd[1977]: time="2024-07-02T00:24:59.587472510Z" level=info msg="Forcibly stopping sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\"" Jul 2 00:24:59.772315 containerd[1977]: time="2024-07-02T00:24:59.772261549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 
00:24:59.775196 containerd[1977]: time="2024-07-02T00:24:59.775128118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 00:24:59.776237 containerd[1977]: time="2024-07-02T00:24:59.776199969Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:59.783321 containerd[1977]: time="2024-07-02T00:24:59.783029705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:59.787776 containerd[1977]: time="2024-07-02T00:24:59.785416398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.521223488s" Jul 2 00:24:59.787776 containerd[1977]: time="2024-07-02T00:24:59.785467593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 00:24:59.792147 containerd[1977]: time="2024-07-02T00:24:59.791258031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:24:59.839297 containerd[1977]: time="2024-07-02T00:24:59.839253472Z" level=info msg="CreateContainer within sandbox \"574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:24:59.911909 containerd[1977]: time="2024-07-02T00:24:59.910040152Z" level=info 
msg="CreateContainer within sandbox \"574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dd1f5806e69ad318f120676a7a444e537453af6c0d60368e0eeae2003f8d8527\"" Jul 2 00:24:59.913695 containerd[1977]: time="2024-07-02T00:24:59.913123160Z" level=info msg="StartContainer for \"dd1f5806e69ad318f120676a7a444e537453af6c0d60368e0eeae2003f8d8527\"" Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.771 [WARNING][5455] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0", GenerateName:"calico-kube-controllers-854b76f5d4-", Namespace:"calico-system", SelfLink:"", UID:"31caa2b5-ff16-4a24-8e21-58c443916572", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854b76f5d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"574b2b8fe15a909694f1e78a309fe6ebb5b5de3ceffc602b97a24d499c183e16", Pod:"calico-kube-controllers-854b76f5d4-mrn7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie497578cdae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.771 [INFO][5455] k8s.go 608: Cleaning up netns ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.771 [INFO][5455] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" iface="eth0" netns="" Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.771 [INFO][5455] k8s.go 615: Releasing IP address(es) ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.771 [INFO][5455] utils.go 188: Calico CNI releasing IP address ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.886 [INFO][5461] ipam_plugin.go 411: Releasing address using handleID ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" HandleID="k8s-pod-network.2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.886 [INFO][5461] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.886 [INFO][5461] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.923 [WARNING][5461] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" HandleID="k8s-pod-network.2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.923 [INFO][5461] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" HandleID="k8s-pod-network.2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Workload="ip--172--31--19--56-k8s-calico--kube--controllers--854b76f5d4--mrn7l-eth0" Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.928 [INFO][5461] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:59.951862 containerd[1977]: 2024-07-02 00:24:59.938 [INFO][5455] k8s.go 621: Teardown processing complete. ContainerID="2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e" Jul 2 00:24:59.956279 containerd[1977]: time="2024-07-02T00:24:59.951839612Z" level=info msg="TearDown network for sandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\" successfully" Jul 2 00:25:00.011810 containerd[1977]: time="2024-07-02T00:25:00.010740970Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:00.011810 containerd[1977]: time="2024-07-02T00:25:00.010875902Z" level=info msg="RemovePodSandbox \"2a181a67378db5b0fa9017564efd03f4391a41376bc36f45d3856c477f90c40e\" returns successfully" Jul 2 00:25:00.013791 containerd[1977]: time="2024-07-02T00:25:00.013720540Z" level=info msg="StopPodSandbox for \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\"" Jul 2 00:25:00.035030 systemd[1]: Started cri-containerd-dd1f5806e69ad318f120676a7a444e537453af6c0d60368e0eeae2003f8d8527.scope - libcontainer container dd1f5806e69ad318f120676a7a444e537453af6c0d60368e0eeae2003f8d8527. Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.149 [WARNING][5501] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6e96f5d9-454f-47bc-982e-380f36019f9f", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626", Pod:"coredns-76f75df574-4snj8", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7de5357b04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.149 [INFO][5501] k8s.go 608: Cleaning up netns ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.149 [INFO][5501] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" iface="eth0" netns="" Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.150 [INFO][5501] k8s.go 615: Releasing IP address(es) ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.151 [INFO][5501] utils.go 188: Calico CNI releasing IP address ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.240 [INFO][5512] ipam_plugin.go 411: Releasing address using handleID ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" HandleID="k8s-pod-network.ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.241 [INFO][5512] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.241 [INFO][5512] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.267 [WARNING][5512] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" HandleID="k8s-pod-network.ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.267 [INFO][5512] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" HandleID="k8s-pod-network.ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.271 [INFO][5512] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.277258 containerd[1977]: 2024-07-02 00:25:00.273 [INFO][5501] k8s.go 621: Teardown processing complete. ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:25:00.278220 containerd[1977]: time="2024-07-02T00:25:00.278179032Z" level=info msg="TearDown network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\" successfully" Jul 2 00:25:00.278352 containerd[1977]: time="2024-07-02T00:25:00.278331654Z" level=info msg="StopPodSandbox for \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\" returns successfully" Jul 2 00:25:00.279121 containerd[1977]: time="2024-07-02T00:25:00.279086215Z" level=info msg="RemovePodSandbox for \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\"" Jul 2 00:25:00.279262 containerd[1977]: time="2024-07-02T00:25:00.279245403Z" level=info msg="Forcibly stopping sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\"" Jul 2 00:25:00.313625 containerd[1977]: time="2024-07-02T00:25:00.313575149Z" level=info msg="StartContainer for \"dd1f5806e69ad318f120676a7a444e537453af6c0d60368e0eeae2003f8d8527\" returns successfully" Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.393 
[WARNING][5536] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6e96f5d9-454f-47bc-982e-380f36019f9f", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"0390d4636c578ead0232b25e9699a896616eb426e01982c0064716a097e69626", Pod:"coredns-76f75df574-4snj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7de5357b04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} 
Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.394 [INFO][5536] k8s.go 608: Cleaning up netns ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.394 [INFO][5536] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" iface="eth0" netns="" Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.394 [INFO][5536] k8s.go 615: Releasing IP address(es) ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.394 [INFO][5536] utils.go 188: Calico CNI releasing IP address ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.442 [INFO][5545] ipam_plugin.go 411: Releasing address using handleID ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" HandleID="k8s-pod-network.ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.442 [INFO][5545] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.443 [INFO][5545] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.457 [WARNING][5545] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" HandleID="k8s-pod-network.ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.457 [INFO][5545] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" HandleID="k8s-pod-network.ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--4snj8-eth0" Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.464 [INFO][5545] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.469849 containerd[1977]: 2024-07-02 00:25:00.467 [INFO][5536] k8s.go 621: Teardown processing complete. ContainerID="ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c" Jul 2 00:25:00.474301 containerd[1977]: time="2024-07-02T00:25:00.472392857Z" level=info msg="TearDown network for sandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\" successfully" Jul 2 00:25:00.488183 containerd[1977]: time="2024-07-02T00:25:00.487878751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:00.488183 containerd[1977]: time="2024-07-02T00:25:00.487970648Z" level=info msg="RemovePodSandbox \"ce85c4410f7c4c33f14a996f9ec5dfe67664db266f22791932c74c3425421b7c\" returns successfully" Jul 2 00:25:00.488793 containerd[1977]: time="2024-07-02T00:25:00.488612042Z" level=info msg="StopPodSandbox for \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\"" Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.589 [WARNING][5564] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a", Pod:"coredns-76f75df574-8dl4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida51d9ec864", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.589 [INFO][5564] k8s.go 608: Cleaning up netns ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.589 [INFO][5564] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" iface="eth0" netns="" Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.589 [INFO][5564] k8s.go 615: Releasing IP address(es) ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.589 [INFO][5564] utils.go 188: Calico CNI releasing IP address ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.698 [INFO][5571] ipam_plugin.go 411: Releasing address using handleID ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" HandleID="k8s-pod-network.b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.702 [INFO][5571] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.703 [INFO][5571] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.765 [WARNING][5571] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" HandleID="k8s-pod-network.b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.765 [INFO][5571] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" HandleID="k8s-pod-network.b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.772 [INFO][5571] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:00.811772 containerd[1977]: 2024-07-02 00:25:00.788 [INFO][5564] k8s.go 621: Teardown processing complete. 
ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:25:00.813951 containerd[1977]: time="2024-07-02T00:25:00.812972688Z" level=info msg="TearDown network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\" successfully" Jul 2 00:25:00.813951 containerd[1977]: time="2024-07-02T00:25:00.813003183Z" level=info msg="StopPodSandbox for \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\" returns successfully" Jul 2 00:25:00.815023 containerd[1977]: time="2024-07-02T00:25:00.814773419Z" level=info msg="RemovePodSandbox for \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\"" Jul 2 00:25:00.815023 containerd[1977]: time="2024-07-02T00:25:00.814821418Z" level=info msg="Forcibly stopping sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\"" Jul 2 00:25:01.000744 kubelet[3500]: I0702 00:25:01.000694 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-854b76f5d4-mrn7l" podStartSLOduration=34.216921894 podStartE2EDuration="40.000635459s" podCreationTimestamp="2024-07-02 00:24:21 +0000 UTC" firstStartedPulling="2024-07-02 00:24:54.002088184 +0000 UTC m=+55.534702736" lastFinishedPulling="2024-07-02 00:24:59.78580175 +0000 UTC m=+61.318416301" observedRunningTime="2024-07-02 00:25:00.739130894 +0000 UTC m=+62.271745464" watchObservedRunningTime="2024-07-02 00:25:01.000635459 +0000 UTC m=+62.533250041" Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.024 [WARNING][5604] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fc4fe4ff-5de6-468b-9f5f-7ee5153f7d19", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"508476811223be820a3854d017d832ec6d580d62a4e0b1a15faaf22fde82624a", Pod:"coredns-76f75df574-8dl4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida51d9ec864", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.025 [INFO][5604] k8s.go 608: Cleaning up netns 
ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.026 [INFO][5604] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" iface="eth0" netns="" Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.026 [INFO][5604] k8s.go 615: Releasing IP address(es) ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.026 [INFO][5604] utils.go 188: Calico CNI releasing IP address ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.083 [INFO][5613] ipam_plugin.go 411: Releasing address using handleID ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" HandleID="k8s-pod-network.b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.084 [INFO][5613] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.084 [INFO][5613] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.091 [WARNING][5613] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" HandleID="k8s-pod-network.b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.091 [INFO][5613] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" HandleID="k8s-pod-network.b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Workload="ip--172--31--19--56-k8s-coredns--76f75df574--8dl4w-eth0" Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.095 [INFO][5613] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:01.105943 containerd[1977]: 2024-07-02 00:25:01.099 [INFO][5604] k8s.go 621: Teardown processing complete. ContainerID="b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20" Jul 2 00:25:01.105943 containerd[1977]: time="2024-07-02T00:25:01.103860839Z" level=info msg="TearDown network for sandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\" successfully" Jul 2 00:25:01.126116 containerd[1977]: time="2024-07-02T00:25:01.125447159Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:01.126116 containerd[1977]: time="2024-07-02T00:25:01.125533077Z" level=info msg="RemovePodSandbox \"b10e63bbc224cd38816a5b114046527347a6f8dc7de2d7da85b3b4b09b5f8e20\" returns successfully" Jul 2 00:25:01.126992 containerd[1977]: time="2024-07-02T00:25:01.126675691Z" level=info msg="StopPodSandbox for \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\"" Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.191 [WARNING][5632] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f1917ed-818f-4ea3-bce5-d0bf3952f03b", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd", Pod:"csi-node-driver-zx2v5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali29a39a4d6fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.191 [INFO][5632] k8s.go 608: Cleaning up netns ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.191 [INFO][5632] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" iface="eth0" netns="" Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.192 [INFO][5632] k8s.go 615: Releasing IP address(es) ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.192 [INFO][5632] utils.go 188: Calico CNI releasing IP address ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.282 [INFO][5639] ipam_plugin.go 411: Releasing address using handleID ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" HandleID="k8s-pod-network.176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.282 [INFO][5639] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.283 [INFO][5639] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.297 [WARNING][5639] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" HandleID="k8s-pod-network.176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.297 [INFO][5639] ipam_plugin.go 439: Releasing address using workloadID ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" HandleID="k8s-pod-network.176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.305 [INFO][5639] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:01.314377 containerd[1977]: 2024-07-02 00:25:01.309 [INFO][5632] k8s.go 621: Teardown processing complete. ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:25:01.314377 containerd[1977]: time="2024-07-02T00:25:01.314363867Z" level=info msg="TearDown network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\" successfully" Jul 2 00:25:01.317391 containerd[1977]: time="2024-07-02T00:25:01.314427234Z" level=info msg="StopPodSandbox for \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\" returns successfully" Jul 2 00:25:01.317391 containerd[1977]: time="2024-07-02T00:25:01.315958572Z" level=info msg="RemovePodSandbox for \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\"" Jul 2 00:25:01.317391 containerd[1977]: time="2024-07-02T00:25:01.316013012Z" level=info msg="Forcibly stopping sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\"" Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.420 [WARNING][5657] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f1917ed-818f-4ea3-bce5-d0bf3952f03b", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd", Pod:"csi-node-driver-zx2v5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali29a39a4d6fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.421 [INFO][5657] k8s.go 608: Cleaning up netns ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.421 [INFO][5657] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" iface="eth0" netns="" Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.421 [INFO][5657] k8s.go 615: Releasing IP address(es) ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.421 [INFO][5657] utils.go 188: Calico CNI releasing IP address ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.529 [INFO][5663] ipam_plugin.go 411: Releasing address using handleID ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" HandleID="k8s-pod-network.176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.530 [INFO][5663] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.530 [INFO][5663] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.546 [WARNING][5663] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" HandleID="k8s-pod-network.176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.546 [INFO][5663] ipam_plugin.go 439: Releasing address using workloadID ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" HandleID="k8s-pod-network.176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Workload="ip--172--31--19--56-k8s-csi--node--driver--zx2v5-eth0" Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.550 [INFO][5663] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:01.572234 containerd[1977]: 2024-07-02 00:25:01.553 [INFO][5657] k8s.go 621: Teardown processing complete. ContainerID="176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c" Jul 2 00:25:01.572911 containerd[1977]: time="2024-07-02T00:25:01.572294297Z" level=info msg="TearDown network for sandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\" successfully" Jul 2 00:25:01.588860 containerd[1977]: time="2024-07-02T00:25:01.587965258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:01.588860 containerd[1977]: time="2024-07-02T00:25:01.588103641Z" level=info msg="RemovePodSandbox \"176f5063a15d0d5bb4194e863913eda9533656d6be5adcacf2e39c691a65a65c\" returns successfully" Jul 2 00:25:02.206540 containerd[1977]: time="2024-07-02T00:25:02.206482325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:02.208502 containerd[1977]: time="2024-07-02T00:25:02.208140384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:25:02.212772 containerd[1977]: time="2024-07-02T00:25:02.210567398Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:02.217816 containerd[1977]: time="2024-07-02T00:25:02.217757698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:02.221333 containerd[1977]: time="2024-07-02T00:25:02.221275203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.429966242s" Jul 2 00:25:02.221333 containerd[1977]: time="2024-07-02T00:25:02.221335047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:25:02.254984 containerd[1977]: 
time="2024-07-02T00:25:02.254926096Z" level=info msg="CreateContainer within sandbox \"9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:25:02.308170 containerd[1977]: time="2024-07-02T00:25:02.308112819Z" level=info msg="CreateContainer within sandbox \"9f8f64d453a2cf247aaff10b36f80019c0d650df778ee3364cbd63f4d65663cd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2ecd3fed1c69c3d09cb7a1c7fd38f3c2723ee59e70d8db3ef568792401f4de59\"" Jul 2 00:25:02.313212 containerd[1977]: time="2024-07-02T00:25:02.313161399Z" level=info msg="StartContainer for \"2ecd3fed1c69c3d09cb7a1c7fd38f3c2723ee59e70d8db3ef568792401f4de59\"" Jul 2 00:25:02.593792 systemd[1]: run-containerd-runc-k8s.io-2ecd3fed1c69c3d09cb7a1c7fd38f3c2723ee59e70d8db3ef568792401f4de59-runc.7QqTFl.mount: Deactivated successfully. Jul 2 00:25:02.603463 systemd[1]: Started cri-containerd-2ecd3fed1c69c3d09cb7a1c7fd38f3c2723ee59e70d8db3ef568792401f4de59.scope - libcontainer container 2ecd3fed1c69c3d09cb7a1c7fd38f3c2723ee59e70d8db3ef568792401f4de59. Jul 2 00:25:02.666176 containerd[1977]: time="2024-07-02T00:25:02.665808956Z" level=info msg="StartContainer for \"2ecd3fed1c69c3d09cb7a1c7fd38f3c2723ee59e70d8db3ef568792401f4de59\" returns successfully" Jul 2 00:25:03.014654 systemd[1]: Started sshd@12-172.31.19.56:22-147.75.109.163:48214.service - OpenSSH per-connection server daemon (147.75.109.163:48214). 
Jul 2 00:25:03.307017 sshd[5711]: Accepted publickey for core from 147.75.109.163 port 48214 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:25:03.314251 kubelet[3500]: I0702 00:25:03.314203 3500 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:25:03.314879 sshd[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:03.322165 kubelet[3500]: I0702 00:25:03.321659 3500 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:25:03.335249 systemd-logind[1957]: New session 13 of user core. Jul 2 00:25:03.341544 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:25:04.175636 sshd[5711]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:04.180814 systemd-logind[1957]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:25:04.182374 systemd[1]: sshd@12-172.31.19.56:22-147.75.109.163:48214.service: Deactivated successfully. Jul 2 00:25:04.187191 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:25:04.190213 systemd-logind[1957]: Removed session 13. Jul 2 00:25:04.215034 systemd[1]: Started sshd@13-172.31.19.56:22-147.75.109.163:48216.service - OpenSSH per-connection server daemon (147.75.109.163:48216). Jul 2 00:25:04.397233 sshd[5725]: Accepted publickey for core from 147.75.109.163 port 48216 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:25:04.400254 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:04.411963 systemd-logind[1957]: New session 14 of user core. Jul 2 00:25:04.416308 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 2 00:25:04.988416 sshd[5725]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:05.000491 systemd-logind[1957]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:25:05.004162 systemd[1]: sshd@13-172.31.19.56:22-147.75.109.163:48216.service: Deactivated successfully. Jul 2 00:25:05.010157 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:25:05.032094 systemd-logind[1957]: Removed session 14. Jul 2 00:25:05.042476 systemd[1]: Started sshd@14-172.31.19.56:22-147.75.109.163:48226.service - OpenSSH per-connection server daemon (147.75.109.163:48226). Jul 2 00:25:05.237188 sshd[5738]: Accepted publickey for core from 147.75.109.163 port 48226 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:25:05.240120 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:05.260885 systemd-logind[1957]: New session 15 of user core. Jul 2 00:25:05.275279 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:25:05.574856 sshd[5738]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:05.581562 systemd[1]: sshd@14-172.31.19.56:22-147.75.109.163:48226.service: Deactivated successfully. Jul 2 00:25:05.584900 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:25:05.587324 systemd-logind[1957]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:25:05.589849 systemd-logind[1957]: Removed session 15. Jul 2 00:25:10.628468 systemd[1]: Started sshd@15-172.31.19.56:22-147.75.109.163:48236.service - OpenSSH per-connection server daemon (147.75.109.163:48236). Jul 2 00:25:10.796821 sshd[5788]: Accepted publickey for core from 147.75.109.163 port 48236 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:25:10.798288 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:10.802930 systemd-logind[1957]: New session 16 of user core. 
Jul 2 00:25:10.807972 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:25:11.054392 sshd[5788]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:11.064079 systemd[1]: sshd@15-172.31.19.56:22-147.75.109.163:48236.service: Deactivated successfully. Jul 2 00:25:11.067829 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:25:11.070609 systemd-logind[1957]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:25:11.072148 systemd-logind[1957]: Removed session 16. Jul 2 00:25:16.096403 systemd[1]: Started sshd@16-172.31.19.56:22-147.75.109.163:41194.service - OpenSSH per-connection server daemon (147.75.109.163:41194). Jul 2 00:25:16.300514 sshd[5805]: Accepted publickey for core from 147.75.109.163 port 41194 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:25:16.303511 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:16.310262 systemd-logind[1957]: New session 17 of user core. Jul 2 00:25:16.318591 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:25:16.905391 sshd[5805]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:16.911272 systemd-logind[1957]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:25:16.912424 systemd[1]: sshd@16-172.31.19.56:22-147.75.109.163:41194.service: Deactivated successfully. Jul 2 00:25:16.915129 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:25:16.925152 systemd-logind[1957]: Removed session 17. Jul 2 00:25:21.960271 systemd[1]: Started sshd@17-172.31.19.56:22-147.75.109.163:41198.service - OpenSSH per-connection server daemon (147.75.109.163:41198). 
Jul 2 00:25:22.228422 sshd[5823]: Accepted publickey for core from 147.75.109.163 port 41198 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:25:22.237307 sshd[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:22.252950 systemd-logind[1957]: New session 18 of user core. Jul 2 00:25:22.263776 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:25:22.797433 sshd[5823]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:22.815975 systemd-logind[1957]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:25:22.816165 systemd[1]: sshd@17-172.31.19.56:22-147.75.109.163:41198.service: Deactivated successfully. Jul 2 00:25:22.824528 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:25:22.827666 systemd-logind[1957]: Removed session 18. Jul 2 00:25:27.839664 systemd[1]: Started sshd@18-172.31.19.56:22-147.75.109.163:57452.service - OpenSSH per-connection server daemon (147.75.109.163:57452). Jul 2 00:25:28.047467 sshd[5837]: Accepted publickey for core from 147.75.109.163 port 57452 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:25:28.050958 sshd[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:28.061445 systemd-logind[1957]: New session 19 of user core. Jul 2 00:25:28.067396 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:25:28.383392 sshd[5837]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:28.440594 systemd[1]: Started sshd@19-172.31.19.56:22-147.75.109.163:57456.service - OpenSSH per-connection server daemon (147.75.109.163:57456). Jul 2 00:25:28.442799 systemd[1]: sshd@18-172.31.19.56:22-147.75.109.163:57452.service: Deactivated successfully. Jul 2 00:25:28.450544 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:25:28.456991 systemd-logind[1957]: Session 19 logged out. Waiting for processes to exit. 
Jul 2 00:25:28.460999 systemd-logind[1957]: Removed session 19. Jul 2 00:25:28.653677 sshd[5852]: Accepted publickey for core from 147.75.109.163 port 57456 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:25:28.660376 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:28.671353 systemd-logind[1957]: New session 20 of user core. Jul 2 00:25:28.676269 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:25:29.776577 sshd[5852]: pam_unix(sshd:session): session closed for user core Jul 2 00:25:29.797097 systemd[1]: sshd@19-172.31.19.56:22-147.75.109.163:57456.service: Deactivated successfully. Jul 2 00:25:29.805562 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:25:29.827159 systemd-logind[1957]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:25:29.836148 systemd[1]: Started sshd@20-172.31.19.56:22-147.75.109.163:57466.service - OpenSSH per-connection server daemon (147.75.109.163:57466). Jul 2 00:25:29.843352 systemd-logind[1957]: Removed session 20. Jul 2 00:25:30.054487 sshd[5891]: Accepted publickey for core from 147.75.109.163 port 57466 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:25:30.059130 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:25:30.074765 systemd-logind[1957]: New session 21 of user core. Jul 2 00:25:30.080482 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 2 00:25:30.586083 kubelet[3500]: I0702 00:25:30.585939 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-zx2v5" podStartSLOduration=61.126248559 podStartE2EDuration="1m10.585885664s" podCreationTimestamp="2024-07-02 00:24:20 +0000 UTC" firstStartedPulling="2024-07-02 00:24:52.764694763 +0000 UTC m=+54.297309322" lastFinishedPulling="2024-07-02 00:25:02.224331872 +0000 UTC m=+63.756946427" observedRunningTime="2024-07-02 00:25:02.802948619 +0000 UTC m=+64.335563190" watchObservedRunningTime="2024-07-02 00:25:30.585885664 +0000 UTC m=+92.118500233"
Jul 2 00:25:30.590699 kubelet[3500]: I0702 00:25:30.590173 3500 topology_manager.go:215] "Topology Admit Handler" podUID="ad2ef72a-a0e7-46d9-a0c4-a150a8375550" podNamespace="calico-apiserver" podName="calico-apiserver-554c4677fc-lwrg9"
Jul 2 00:25:30.630775 systemd[1]: Created slice kubepods-besteffort-podad2ef72a_a0e7_46d9_a0c4_a150a8375550.slice - libcontainer container kubepods-besteffort-podad2ef72a_a0e7_46d9_a0c4_a150a8375550.slice.
Jul 2 00:25:30.754960 kubelet[3500]: I0702 00:25:30.754919 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad2ef72a-a0e7-46d9-a0c4-a150a8375550-calico-apiserver-certs\") pod \"calico-apiserver-554c4677fc-lwrg9\" (UID: \"ad2ef72a-a0e7-46d9-a0c4-a150a8375550\") " pod="calico-apiserver/calico-apiserver-554c4677fc-lwrg9"
Jul 2 00:25:30.755161 kubelet[3500]: I0702 00:25:30.755017 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrzz\" (UniqueName: \"kubernetes.io/projected/ad2ef72a-a0e7-46d9-a0c4-a150a8375550-kube-api-access-mcrzz\") pod \"calico-apiserver-554c4677fc-lwrg9\" (UID: \"ad2ef72a-a0e7-46d9-a0c4-a150a8375550\") " pod="calico-apiserver/calico-apiserver-554c4677fc-lwrg9"
Jul 2 00:25:30.860090 kubelet[3500]: E0702 00:25:30.859552 3500 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Jul 2 00:25:30.878337 kubelet[3500]: E0702 00:25:30.877759 3500 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad2ef72a-a0e7-46d9-a0c4-a150a8375550-calico-apiserver-certs podName:ad2ef72a-a0e7-46d9-a0c4-a150a8375550 nodeName:}" failed. No retries permitted until 2024-07-02 00:25:31.377726501 +0000 UTC m=+92.910341060 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ad2ef72a-a0e7-46d9-a0c4-a150a8375550-calico-apiserver-certs") pod "calico-apiserver-554c4677fc-lwrg9" (UID: "ad2ef72a-a0e7-46d9-a0c4-a150a8375550") : secret "calico-apiserver-certs" not found
Jul 2 00:25:31.546122 containerd[1977]: time="2024-07-02T00:25:31.541305461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554c4677fc-lwrg9,Uid:ad2ef72a-a0e7-46d9-a0c4-a150a8375550,Namespace:calico-apiserver,Attempt:0,}"
Jul 2 00:25:32.196037 systemd-networkd[1872]: cali0bb561c2fe1: Link UP
Jul 2 00:25:32.197298 systemd-networkd[1872]: cali0bb561c2fe1: Gained carrier
Jul 2 00:25:32.206463 (udev-worker)[5931]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:31.971 [INFO][5914] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0 calico-apiserver-554c4677fc- calico-apiserver ad2ef72a-a0e7-46d9-a0c4-a150a8375550 1016 0 2024-07-02 00:25:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:554c4677fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-56 calico-apiserver-554c4677fc-lwrg9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0bb561c2fe1 [] []}} ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Namespace="calico-apiserver" Pod="calico-apiserver-554c4677fc-lwrg9" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:31.971 [INFO][5914] k8s.go 77: Extracted identifiers for CmdAddK8s
ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Namespace="calico-apiserver" Pod="calico-apiserver-554c4677fc-lwrg9" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.061 [INFO][5925] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" HandleID="k8s-pod-network.632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Workload="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.092 [INFO][5925] ipam_plugin.go 264: Auto assigning IP ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" HandleID="k8s-pod-network.632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Workload="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-56", "pod":"calico-apiserver-554c4677fc-lwrg9", "timestamp":"2024-07-02 00:25:32.061710777 +0000 UTC"}, Hostname:"ip-172-31-19-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.092 [INFO][5925] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.092 [INFO][5925] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.092 [INFO][5925] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-56'
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.097 [INFO][5925] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" host="ip-172-31-19-56"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.110 [INFO][5925] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-56"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.141 [INFO][5925] ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-19-56"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.145 [INFO][5925] ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-19-56"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.150 [INFO][5925] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-19-56"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.150 [INFO][5925] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" host="ip-172-31-19-56"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.153 [INFO][5925] ipam.go 1685: Creating new handle: k8s-pod-network.632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.160 [INFO][5925] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" host="ip-172-31-19-56"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.172 [INFO][5925] ipam.go 1216: Successfully claimed IPs: [192.168.72.133/26] block=192.168.72.128/26
handle="k8s-pod-network.632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" host="ip-172-31-19-56"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.172 [INFO][5925] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.133/26] handle="k8s-pod-network.632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" host="ip-172-31-19-56"
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.173 [INFO][5925] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:25:32.222860 containerd[1977]: 2024-07-02 00:25:32.174 [INFO][5925] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.72.133/26] IPv6=[] ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" HandleID="k8s-pod-network.632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Workload="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0"
Jul 2 00:25:32.225218 containerd[1977]: 2024-07-02 00:25:32.181 [INFO][5914] k8s.go 386: Populated endpoint ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Namespace="calico-apiserver" Pod="calico-apiserver-554c4677fc-lwrg9" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0", GenerateName:"calico-apiserver-554c4677fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad2ef72a-a0e7-46d9-a0c4-a150a8375550", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554c4677fc", "projectcalico.org/namespace":"calico-apiserver",
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"", Pod:"calico-apiserver-554c4677fc-lwrg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0bb561c2fe1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:32.225218 containerd[1977]: 2024-07-02 00:25:32.181 [INFO][5914] k8s.go 387: Calico CNI using IPs: [192.168.72.133/32] ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Namespace="calico-apiserver" Pod="calico-apiserver-554c4677fc-lwrg9" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0" Jul 2 00:25:32.225218 containerd[1977]: 2024-07-02 00:25:32.181 [INFO][5914] dataplane_linux.go 68: Setting the host side veth name to cali0bb561c2fe1 ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Namespace="calico-apiserver" Pod="calico-apiserver-554c4677fc-lwrg9" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0" Jul 2 00:25:32.225218 containerd[1977]: 2024-07-02 00:25:32.197 [INFO][5914] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Namespace="calico-apiserver" Pod="calico-apiserver-554c4677fc-lwrg9" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0" Jul 2 00:25:32.225218 containerd[1977]: 2024-07-02 00:25:32.199 [INFO][5914] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" Namespace="calico-apiserver" Pod="calico-apiserver-554c4677fc-lwrg9" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0", GenerateName:"calico-apiserver-554c4677fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad2ef72a-a0e7-46d9-a0c4-a150a8375550", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554c4677fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-56", ContainerID:"632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a", Pod:"calico-apiserver-554c4677fc-lwrg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0bb561c2fe1", MAC:"32:ca:af:4a:8f:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:32.225218 containerd[1977]: 2024-07-02 00:25:32.216 [INFO][5914] k8s.go 500: Wrote updated endpoint to datastore ContainerID="632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a" 
Namespace="calico-apiserver" Pod="calico-apiserver-554c4677fc-lwrg9" WorkloadEndpoint="ip--172--31--19--56-k8s-calico--apiserver--554c4677fc--lwrg9-eth0"
Jul 2 00:25:32.422735 containerd[1977]: time="2024-07-02T00:25:32.422553768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:25:32.422735 containerd[1977]: time="2024-07-02T00:25:32.422615250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:32.422735 containerd[1977]: time="2024-07-02T00:25:32.422640283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:25:32.422735 containerd[1977]: time="2024-07-02T00:25:32.422655623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:32.530189 systemd[1]: Started cri-containerd-632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a.scope - libcontainer container 632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a.
Jul 2 00:25:33.137532 containerd[1977]: time="2024-07-02T00:25:33.137465464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554c4677fc-lwrg9,Uid:ad2ef72a-a0e7-46d9-a0c4-a150a8375550,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a\""
Jul 2 00:25:33.144190 containerd[1977]: time="2024-07-02T00:25:33.144142625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul 2 00:25:33.429274 systemd-networkd[1872]: cali0bb561c2fe1: Gained IPv6LL
Jul 2 00:25:34.333212 sshd[5891]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:34.348356 systemd-logind[1957]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:25:34.350459 systemd[1]: sshd@20-172.31.19.56:22-147.75.109.163:57466.service: Deactivated successfully.
Jul 2 00:25:34.357003 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:25:34.389216 systemd[1]: Started sshd@21-172.31.19.56:22-147.75.109.163:45226.service - OpenSSH per-connection server daemon (147.75.109.163:45226).
Jul 2 00:25:34.396010 systemd-logind[1957]: Removed session 21.
Jul 2 00:25:34.718092 sshd[5992]: Accepted publickey for core from 147.75.109.163 port 45226 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:34.719262 sshd[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:34.741159 systemd-logind[1957]: New session 22 of user core.
Jul 2 00:25:34.750487 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:25:35.953162 ntpd[1951]: Listen normally on 14 cali0bb561c2fe1 [fe80::ecee:eeff:feee:eeee%11]:123
Jul 2 00:25:35.954948 ntpd[1951]: 2 Jul 00:25:35 ntpd[1951]: Listen normally on 14 cali0bb561c2fe1 [fe80::ecee:eeff:feee:eeee%11]:123
Jul 2 00:25:36.624591 sshd[5992]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:36.634983 systemd[1]: sshd@21-172.31.19.56:22-147.75.109.163:45226.service: Deactivated successfully.
Jul 2 00:25:36.645539 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:25:36.652156 systemd-logind[1957]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:25:36.676599 systemd[1]: Started sshd@22-172.31.19.56:22-147.75.109.163:45240.service - OpenSSH per-connection server daemon (147.75.109.163:45240).
Jul 2 00:25:36.682176 systemd-logind[1957]: Removed session 22.
Jul 2 00:25:36.940120 sshd[6010]: Accepted publickey for core from 147.75.109.163 port 45240 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:36.945268 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:36.986563 systemd[1]: run-containerd-runc-k8s.io-dd1f5806e69ad318f120676a7a444e537453af6c0d60368e0eeae2003f8d8527-runc.CYF05J.mount: Deactivated successfully.
Jul 2 00:25:37.004137 systemd-logind[1957]: New session 23 of user core.
Jul 2 00:25:37.016763 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:25:37.506841 sshd[6010]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:37.515528 systemd[1]: sshd@22-172.31.19.56:22-147.75.109.163:45240.service: Deactivated successfully.
Jul 2 00:25:37.524019 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:25:37.526069 systemd-logind[1957]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:25:37.528376 systemd-logind[1957]: Removed session 23.
Jul 2 00:25:37.987511 containerd[1977]: time="2024-07-02T00:25:37.987459411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:25:38.027084 containerd[1977]: time="2024-07-02T00:25:37.998991031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jul 2 00:25:38.047033 containerd[1977]: time="2024-07-02T00:25:38.046962961Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:25:38.055065 containerd[1977]: time="2024-07-02T00:25:38.053000096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:25:38.066112 containerd[1977]: time="2024-07-02T00:25:38.065123389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.910909275s"
Jul 2 00:25:38.066112 containerd[1977]: time="2024-07-02T00:25:38.065212790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul 2 00:25:38.092380 containerd[1977]: time="2024-07-02T00:25:38.092340393Z" level=info msg="CreateContainer within sandbox \"632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 00:25:38.160012 containerd[1977]:
time="2024-07-02T00:25:38.158377342Z" level=info msg="CreateContainer within sandbox \"632423b0f69837a9a1b873e52ef1ff8e110606229b2fa9ff31ac4c9bfcadf99a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c797bffd23d97530867b7ec7a1ad9809bc0ddf24c856d9c2e387deebe9f57196\""
Jul 2 00:25:38.163198 containerd[1977]: time="2024-07-02T00:25:38.160931657Z" level=info msg="StartContainer for \"c797bffd23d97530867b7ec7a1ad9809bc0ddf24c856d9c2e387deebe9f57196\""
Jul 2 00:25:38.393335 systemd[1]: Started cri-containerd-c797bffd23d97530867b7ec7a1ad9809bc0ddf24c856d9c2e387deebe9f57196.scope - libcontainer container c797bffd23d97530867b7ec7a1ad9809bc0ddf24c856d9c2e387deebe9f57196.
Jul 2 00:25:38.462655 containerd[1977]: time="2024-07-02T00:25:38.462563597Z" level=info msg="StartContainer for \"c797bffd23d97530867b7ec7a1ad9809bc0ddf24c856d9c2e387deebe9f57196\" returns successfully"
Jul 2 00:25:39.549086 kubelet[3500]: I0702 00:25:39.547482 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-554c4677fc-lwrg9" podStartSLOduration=4.495250713 podStartE2EDuration="9.418173474s" podCreationTimestamp="2024-07-02 00:25:30 +0000 UTC" firstStartedPulling="2024-07-02 00:25:33.143691617 +0000 UTC m=+94.676306178" lastFinishedPulling="2024-07-02 00:25:38.066614389 +0000 UTC m=+99.599228939" observedRunningTime="2024-07-02 00:25:39.006469514 +0000 UTC m=+100.539084078" watchObservedRunningTime="2024-07-02 00:25:39.418173474 +0000 UTC m=+100.950788045"
Jul 2 00:25:42.546416 systemd[1]: Started sshd@23-172.31.19.56:22-147.75.109.163:51792.service - OpenSSH per-connection server daemon (147.75.109.163:51792).
Jul 2 00:25:42.845782 sshd[6098]: Accepted publickey for core from 147.75.109.163 port 51792 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:42.848812 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:42.857294 systemd-logind[1957]: New session 24 of user core.
Jul 2 00:25:42.861436 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:25:43.344749 sshd[6098]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:43.350380 systemd[1]: sshd@23-172.31.19.56:22-147.75.109.163:51792.service: Deactivated successfully.
Jul 2 00:25:43.356429 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:25:43.358560 systemd-logind[1957]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:25:43.362368 systemd-logind[1957]: Removed session 24.
Jul 2 00:25:48.384411 systemd[1]: Started sshd@24-172.31.19.56:22-147.75.109.163:51808.service - OpenSSH per-connection server daemon (147.75.109.163:51808).
Jul 2 00:25:48.575096 sshd[6120]: Accepted publickey for core from 147.75.109.163 port 51808 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:48.576435 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:48.612753 systemd-logind[1957]: New session 25 of user core.
Jul 2 00:25:48.620299 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:25:48.888409 sshd[6120]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:48.894763 systemd[1]: sshd@24-172.31.19.56:22-147.75.109.163:51808.service: Deactivated successfully.
Jul 2 00:25:48.898283 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:25:48.899108 systemd-logind[1957]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:25:48.900939 systemd-logind[1957]: Removed session 25.
Jul 2 00:25:53.926802 systemd[1]: Started sshd@25-172.31.19.56:22-147.75.109.163:40714.service - OpenSSH per-connection server daemon (147.75.109.163:40714).
Jul 2 00:25:54.097722 sshd[6140]: Accepted publickey for core from 147.75.109.163 port 40714 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:54.100401 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:54.107985 systemd-logind[1957]: New session 26 of user core.
Jul 2 00:25:54.114298 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:25:54.410023 sshd[6140]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:54.414098 systemd[1]: sshd@25-172.31.19.56:22-147.75.109.163:40714.service: Deactivated successfully.
Jul 2 00:25:54.418222 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:25:54.420553 systemd-logind[1957]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:25:54.422555 systemd-logind[1957]: Removed session 26.
Jul 2 00:25:59.451970 systemd[1]: Started sshd@26-172.31.19.56:22-147.75.109.163:40720.service - OpenSSH per-connection server daemon (147.75.109.163:40720).
Jul 2 00:25:59.646683 sshd[6179]: Accepted publickey for core from 147.75.109.163 port 40720 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:59.652202 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:59.659628 systemd-logind[1957]: New session 27 of user core.
Jul 2 00:25:59.664241 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 00:25:59.909958 sshd[6179]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:59.929003 systemd-logind[1957]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:25:59.930027 systemd[1]: sshd@26-172.31.19.56:22-147.75.109.163:40720.service: Deactivated successfully.
Jul 2 00:25:59.934696 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:25:59.936780 systemd-logind[1957]: Removed session 27.
Jul 2 00:26:04.953549 systemd[1]: Started sshd@27-172.31.19.56:22-147.75.109.163:51882.service - OpenSSH per-connection server daemon (147.75.109.163:51882).
Jul 2 00:26:05.192168 sshd[6218]: Accepted publickey for core from 147.75.109.163 port 51882 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:26:05.193574 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:05.203967 systemd-logind[1957]: New session 28 of user core.
Jul 2 00:26:05.209411 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 2 00:26:05.493002 sshd[6218]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:05.498720 systemd[1]: sshd@27-172.31.19.56:22-147.75.109.163:51882.service: Deactivated successfully.
Jul 2 00:26:05.502105 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 00:26:05.503797 systemd-logind[1957]: Session 28 logged out. Waiting for processes to exit.
Jul 2 00:26:05.505097 systemd-logind[1957]: Removed session 28.
Jul 2 00:26:10.533088 systemd[1]: Started sshd@28-172.31.19.56:22-147.75.109.163:51884.service - OpenSSH per-connection server daemon (147.75.109.163:51884).
Jul 2 00:26:10.759033 sshd[6256]: Accepted publickey for core from 147.75.109.163 port 51884 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:26:10.775526 sshd[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:10.783205 systemd-logind[1957]: New session 29 of user core.
Jul 2 00:26:10.792557 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 2 00:26:11.111098 sshd[6256]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:11.122616 systemd[1]: sshd@28-172.31.19.56:22-147.75.109.163:51884.service: Deactivated successfully.
Jul 2 00:26:11.126198 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 00:26:11.128209 systemd-logind[1957]: Session 29 logged out. Waiting for processes to exit.
Jul 2 00:26:11.139072 systemd-logind[1957]: Removed session 29.
Jul 2 00:26:16.152642 systemd[1]: Started sshd@29-172.31.19.56:22-147.75.109.163:44300.service - OpenSSH per-connection server daemon (147.75.109.163:44300).
Jul 2 00:26:16.340399 sshd[6281]: Accepted publickey for core from 147.75.109.163 port 44300 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:26:16.342925 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:16.352364 systemd-logind[1957]: New session 30 of user core.
Jul 2 00:26:16.357591 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 2 00:26:16.706656 sshd[6281]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:16.712466 systemd[1]: sshd@29-172.31.19.56:22-147.75.109.163:44300.service: Deactivated successfully.
Jul 2 00:26:16.715036 systemd[1]: session-30.scope: Deactivated successfully.
Jul 2 00:26:16.716278 systemd-logind[1957]: Session 30 logged out. Waiting for processes to exit.
Jul 2 00:26:16.717475 systemd-logind[1957]: Removed session 30.
Jul 2 00:26:28.446449 systemd[1]: run-containerd-runc-k8s.io-15e53614f27cc21eaf808215958ad9bfa0d6fa41b4c5afd7f3c7eb2421aa87a9-runc.SXNCuu.mount: Deactivated successfully.
Jul 2 00:26:30.960394 systemd[1]: cri-containerd-e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b.scope: Deactivated successfully.
Jul 2 00:26:30.960877 systemd[1]: cri-containerd-e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b.scope: Consumed 6.582s CPU time.
Jul 2 00:26:31.086938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b-rootfs.mount: Deactivated successfully.
Jul 2 00:26:31.088798 containerd[1977]: time="2024-07-02T00:26:31.080909384Z" level=info msg="shim disconnected" id=e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b namespace=k8s.io
Jul 2 00:26:31.089319 containerd[1977]: time="2024-07-02T00:26:31.089274585Z" level=warning msg="cleaning up after shim disconnected" id=e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b namespace=k8s.io
Jul 2 00:26:31.089422 containerd[1977]: time="2024-07-02T00:26:31.089404254Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:26:31.272651 kubelet[3500]: I0702 00:26:31.272589 3500 scope.go:117] "RemoveContainer" containerID="e759578016a234a14adcffd1a237cd9085f8776ec6c41a51690d5dd6517f371b"
Jul 2 00:26:31.300987 containerd[1977]: time="2024-07-02T00:26:31.300946249Z" level=info msg="CreateContainer within sandbox \"f06b77c6f913783483b0a9741e9f4f8f29e95fdaf85ca095fe5df6865cfe389d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 2 00:26:31.327299 containerd[1977]: time="2024-07-02T00:26:31.327235147Z" level=info msg="CreateContainer within sandbox \"f06b77c6f913783483b0a9741e9f4f8f29e95fdaf85ca095fe5df6865cfe389d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3e8519cfa5e5dfaba66dc200edb34dbc2a8108e1ae3094b9ed6f8b9ba7a4fb06\""
Jul 2 00:26:31.327871 containerd[1977]: time="2024-07-02T00:26:31.327750595Z" level=info msg="StartContainer for \"3e8519cfa5e5dfaba66dc200edb34dbc2a8108e1ae3094b9ed6f8b9ba7a4fb06\""
Jul 2 00:26:31.373312 systemd[1]: Started cri-containerd-3e8519cfa5e5dfaba66dc200edb34dbc2a8108e1ae3094b9ed6f8b9ba7a4fb06.scope - libcontainer container 3e8519cfa5e5dfaba66dc200edb34dbc2a8108e1ae3094b9ed6f8b9ba7a4fb06.
Jul 2 00:26:31.412424 containerd[1977]: time="2024-07-02T00:26:31.411889581Z" level=info msg="StartContainer for \"3e8519cfa5e5dfaba66dc200edb34dbc2a8108e1ae3094b9ed6f8b9ba7a4fb06\" returns successfully"
Jul 2 00:26:31.897823 systemd[1]: cri-containerd-77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d.scope: Deactivated successfully.
Jul 2 00:26:31.899869 systemd[1]: cri-containerd-77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d.scope: Consumed 3.189s CPU time, 20.7M memory peak, 0B memory swap peak.
Jul 2 00:26:31.947560 containerd[1977]: time="2024-07-02T00:26:31.938013280Z" level=info msg="shim disconnected" id=77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d namespace=k8s.io
Jul 2 00:26:31.947560 containerd[1977]: time="2024-07-02T00:26:31.947356691Z" level=warning msg="cleaning up after shim disconnected" id=77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d namespace=k8s.io
Jul 2 00:26:31.947560 containerd[1977]: time="2024-07-02T00:26:31.947375093Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:26:32.082803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d-rootfs.mount: Deactivated successfully.
Jul 2 00:26:32.268525 kubelet[3500]: I0702 00:26:32.268485 3500 scope.go:117] "RemoveContainer" containerID="77ce3c0dc562e421f00d419c37503db6f3135b1eebd48118a61b869554a7349d"
Jul 2 00:26:32.274190 containerd[1977]: time="2024-07-02T00:26:32.274023823Z" level=info msg="CreateContainer within sandbox \"561861da65219675c4aeb18117ecc22e24c7eec5e72613825028c031dc032887\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 00:26:32.320674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425379530.mount: Deactivated successfully.
Jul 2 00:26:32.323997 containerd[1977]: time="2024-07-02T00:26:32.323719249Z" level=info msg="CreateContainer within sandbox \"561861da65219675c4aeb18117ecc22e24c7eec5e72613825028c031dc032887\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a7fd952fe351352f2faa6255d4b0fc8fa0cdf82face313481229201fc7076b4b\""
Jul 2 00:26:32.331943 containerd[1977]: time="2024-07-02T00:26:32.329608679Z" level=info msg="StartContainer for \"a7fd952fe351352f2faa6255d4b0fc8fa0cdf82face313481229201fc7076b4b\""
Jul 2 00:26:32.332114 kubelet[3500]: E0702 00:26:32.330720 3500 request.go:1116] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Jul 2 00:26:32.375659 kubelet[3500]: E0702 00:26:32.375078 3500 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Jul 2 00:26:32.403361 systemd[1]: Started cri-containerd-a7fd952fe351352f2faa6255d4b0fc8fa0cdf82face313481229201fc7076b4b.scope - libcontainer container a7fd952fe351352f2faa6255d4b0fc8fa0cdf82face313481229201fc7076b4b.
Jul 2 00:26:32.495355 containerd[1977]: time="2024-07-02T00:26:32.495305710Z" level=info msg="StartContainer for \"a7fd952fe351352f2faa6255d4b0fc8fa0cdf82face313481229201fc7076b4b\" returns successfully"
Jul 2 00:26:36.779669 systemd[1]: run-containerd-runc-k8s.io-dd1f5806e69ad318f120676a7a444e537453af6c0d60368e0eeae2003f8d8527-runc.pr5Z2I.mount: Deactivated successfully.
Jul 2 00:26:36.847321 systemd[1]: cri-containerd-adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6.scope: Deactivated successfully.
Jul 2 00:26:36.847631 systemd[1]: cri-containerd-adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6.scope: Consumed 1.590s CPU time, 15.5M memory peak, 0B memory swap peak.
Jul 2 00:26:36.948134 containerd[1977]: time="2024-07-02T00:26:36.947960480Z" level=info msg="shim disconnected" id=adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6 namespace=k8s.io
Jul 2 00:26:36.948134 containerd[1977]: time="2024-07-02T00:26:36.948032160Z" level=warning msg="cleaning up after shim disconnected" id=adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6 namespace=k8s.io
Jul 2 00:26:36.951185 containerd[1977]: time="2024-07-02T00:26:36.951127697Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:26:36.952182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6-rootfs.mount: Deactivated successfully.
Jul 2 00:26:37.309592 kubelet[3500]: I0702 00:26:37.309562 3500 scope.go:117] "RemoveContainer" containerID="adee8047a1e720a5895446fdb6bf09ed82f3850ada4b2100b0e1e1761a844ea6"
Jul 2 00:26:37.312797 containerd[1977]: time="2024-07-02T00:26:37.312740568Z" level=info msg="CreateContainer within sandbox \"d4b456e17e301b7ba2d5c4ea11864b9b7a078497f74281f64896d217c2b24fc5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 00:26:37.339336 containerd[1977]: time="2024-07-02T00:26:37.339289832Z" level=info msg="CreateContainer within sandbox \"d4b456e17e301b7ba2d5c4ea11864b9b7a078497f74281f64896d217c2b24fc5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"52ddb6397e5170f98ad837dac2e90a55bc1a52ab7bdb44ce98b28ba0045ed07f\""
Jul 2 00:26:37.339969 containerd[1977]: time="2024-07-02T00:26:37.339934829Z" level=info msg="StartContainer for \"52ddb6397e5170f98ad837dac2e90a55bc1a52ab7bdb44ce98b28ba0045ed07f\""
Jul 2 00:26:37.381438 systemd[1]: Started cri-containerd-52ddb6397e5170f98ad837dac2e90a55bc1a52ab7bdb44ce98b28ba0045ed07f.scope - libcontainer container 52ddb6397e5170f98ad837dac2e90a55bc1a52ab7bdb44ce98b28ba0045ed07f.
Jul 2 00:26:37.471766 containerd[1977]: time="2024-07-02T00:26:37.471720119Z" level=info msg="StartContainer for \"52ddb6397e5170f98ad837dac2e90a55bc1a52ab7bdb44ce98b28ba0045ed07f\" returns successfully"