Jun 25 18:36:10.184522 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:36:10.184603 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:36:10.184619 kernel: BIOS-provided physical RAM map: Jun 25 18:36:10.184658 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 18:36:10.184670 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 18:36:10.184748 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 18:36:10.184767 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jun 25 18:36:10.184830 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jun 25 18:36:10.184843 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jun 25 18:36:10.184855 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 18:36:10.185192 kernel: NX (Execute Disable) protection: active Jun 25 18:36:10.185209 kernel: APIC: Static calls initialized Jun 25 18:36:10.185222 kernel: SMBIOS 2.7 present. Jun 25 18:36:10.185235 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jun 25 18:36:10.185256 kernel: Hypervisor detected: KVM Jun 25 18:36:10.185270 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 18:36:10.185284 kernel: kvm-clock: using sched offset of 7968210157 cycles Jun 25 18:36:10.185300 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 18:36:10.185314 kernel: tsc: Detected 2499.992 MHz processor Jun 25 18:36:10.185328 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:36:10.185343 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:36:10.185361 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jun 25 18:36:10.185376 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 25 18:36:10.185389 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:36:10.185404 kernel: Using GB pages for direct mapping Jun 25 18:36:10.185418 kernel: ACPI: Early table checksum verification disabled Jun 25 18:36:10.185432 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jun 25 18:36:10.185446 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jun 25 18:36:10.185461 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 25 18:36:10.185475 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jun 25 18:36:10.185493 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jun 25 18:36:10.185507 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 18:36:10.185522 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 25 18:36:10.185536 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jun 25 18:36:10.185550 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 25 18:36:10.185565 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jun 25 18:36:10.185579 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jun 25 18:36:10.185593 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 18:36:10.185707 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jun 25 18:36:10.185722 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jun 25 18:36:10.185742 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jun 25 18:36:10.185757 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jun 25 18:36:10.185772 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jun 25 18:36:10.185787 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jun 25 18:36:10.185805 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jun 25 18:36:10.185820 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jun 25 18:36:10.185835 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jun 25 18:36:10.185850 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jun 25 18:36:10.185866 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 18:36:10.185881 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 18:36:10.185896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jun 25 18:36:10.185910 kernel: NUMA: Initialized distance table, cnt=1 Jun 25 18:36:10.185925 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jun 25 18:36:10.185951 kernel: Zone ranges: Jun 25 18:36:10.185966 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:36:10.185994 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jun 25 18:36:10.186515 kernel: Normal empty Jun 25 18:36:10.186533 kernel: Movable zone start for each node Jun 25 18:36:10.186549 kernel: Early memory node ranges Jun 25 18:36:10.186564 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 18:36:10.186579 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jun 25 18:36:10.186594 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jun 25 18:36:10.186614 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:36:10.186629 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 18:36:10.186644 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jun 25 18:36:10.186659 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 25 18:36:10.186674 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 18:36:10.186689 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jun 25 18:36:10.186704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 18:36:10.186719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:36:10.186734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 18:36:10.186752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 18:36:10.186767 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:36:10.186782 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 18:36:10.186797 kernel: TSC deadline timer available Jun 25 18:36:10.186848 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:36:10.186861 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 25 18:36:10.186877 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jun 25 18:36:10.186892 kernel: Booting paravirtualized kernel on KVM Jun 25 18:36:10.186907 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:36:10.186923 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:36:10.186942 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:36:10.186958 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:36:10.186972 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:36:10.189032 kernel: kvm-guest: PV spinlocks enabled Jun 25 18:36:10.189054 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:36:10.189073 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:36:10.189089 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:36:10.189109 kernel: random: crng init done Jun 25 18:36:10.189124 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:36:10.189139 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:36:10.189154 kernel: Fallback order for Node 0: 0 Jun 25 18:36:10.189169 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jun 25 18:36:10.189184 kernel: Policy zone: DMA32 Jun 25 18:36:10.189199 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:36:10.189214 kernel: Memory: 1926200K/2057760K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131300K reserved, 0K cma-reserved) Jun 25 18:36:10.189230 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:36:10.189248 kernel: Kernel/User page tables isolation: enabled Jun 25 18:36:10.189263 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:36:10.189278 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:36:10.189294 kernel: Dynamic Preempt: voluntary Jun 25 18:36:10.189310 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:36:10.189325 kernel: rcu: RCU event tracing is enabled. Jun 25 18:36:10.189341 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:36:10.189356 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:36:10.189371 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:36:10.189386 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:36:10.189404 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:36:10.189419 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:36:10.189434 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 18:36:10.189449 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 25 18:36:10.189464 kernel: Console: colour VGA+ 80x25 Jun 25 18:36:10.189479 kernel: printk: console [ttyS0] enabled Jun 25 18:36:10.189494 kernel: ACPI: Core revision 20230628 Jun 25 18:36:10.189509 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jun 25 18:36:10.189524 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:36:10.189543 kernel: x2apic enabled Jun 25 18:36:10.189558 kernel: APIC: Switched APIC routing to: physical x2apic Jun 25 18:36:10.189585 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093255d7c, max_idle_ns: 440795319144 ns Jun 25 18:36:10.189604 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499992) Jun 25 18:36:10.189620 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 18:36:10.189636 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 18:36:10.189651 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:36:10.189667 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:36:10.189682 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:36:10.189698 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:36:10.189714 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 25 18:36:10.189730 kernel: RETBleed: Vulnerable Jun 25 18:36:10.189749 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:36:10.189765 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:36:10.189780 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:36:10.189796 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 18:36:10.189811 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:36:10.189827 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:36:10.189846 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:36:10.189862 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jun 25 18:36:10.189878 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jun 25 18:36:10.189893 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 18:36:10.189909 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 18:36:10.189925 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 18:36:10.189949 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jun 25 18:36:10.189965 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:36:10.189996 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jun 25 18:36:10.190013 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jun 25 18:36:10.190028 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jun 25 18:36:10.190047 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jun 25 18:36:10.190063 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jun 25 18:36:10.190079 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jun 25 18:36:10.190094 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jun 25 18:36:10.190110 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:36:10.190125 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:36:10.190141 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:36:10.190157 kernel: SELinux: Initializing. Jun 25 18:36:10.190172 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 18:36:10.190188 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 18:36:10.190204 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 18:36:10.190220 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:36:10.190239 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:36:10.190255 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:36:10.190272 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 18:36:10.190288 kernel: signal: max sigframe size: 3632 Jun 25 18:36:10.190304 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:36:10.190320 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:36:10.190337 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 18:36:10.190353 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:36:10.190369 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:36:10.190447 kernel: .... node #0, CPUs: #1 Jun 25 18:36:10.190467 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 25 18:36:10.190484 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 25 18:36:10.190500 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:36:10.190517 kernel: smpboot: Max logical packages: 1 Jun 25 18:36:10.190533 kernel: smpboot: Total of 2 processors activated (9999.96 BogoMIPS) Jun 25 18:36:10.190549 kernel: devtmpfs: initialized Jun 25 18:36:10.190565 kernel: x86/mm: Memory block size: 128MB Jun 25 18:36:10.190585 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:36:10.190602 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:36:10.190618 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:36:10.190634 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:36:10.190650 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:36:10.190666 kernel: audit: type=2000 audit(1719340569.351:1): state=initialized audit_enabled=0 res=1 Jun 25 18:36:10.190682 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:36:10.190698 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:36:10.190714 kernel: cpuidle: using governor menu Jun 25 18:36:10.190733 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:36:10.190749 kernel: dca service started, version 1.12.1 Jun 25 18:36:10.190870 kernel: PCI: Using configuration type 1 for base access Jun 25 18:36:10.190888 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 18:36:10.190904 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:36:10.190921 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:36:10.190937 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:36:10.190952 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:36:10.190969 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:36:10.195032 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:36:10.195059 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:36:10.195076 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:36:10.195094 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 25 18:36:10.195111 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:36:10.195127 kernel: ACPI: Interpreter enabled Jun 25 18:36:10.195144 kernel: ACPI: PM: (supports S0 S5) Jun 25 18:36:10.195160 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:36:10.195177 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:36:10.195199 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 18:36:10.195215 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jun 25 18:36:10.195232 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 18:36:10.195471 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 18:36:10.195683 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 18:36:10.195832 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 25 18:36:10.195917 kernel: acpiphp: Slot [3] registered Jun 25 18:36:10.195935 kernel: acpiphp: Slot [4] registered Jun 25 18:36:10.195956 kernel: acpiphp: Slot [5] registered Jun 25 18:36:10.195973 kernel: acpiphp: Slot [6] registered Jun 25 18:36:10.196003 kernel: acpiphp: Slot [7] registered Jun 25 18:36:10.196018 kernel: acpiphp: Slot [8] registered Jun 25 18:36:10.196035 kernel: acpiphp: Slot [9] registered Jun 25 18:36:10.196052 kernel: acpiphp: Slot [10] registered Jun 25 18:36:10.196115 kernel: acpiphp: Slot [11] registered Jun 25 18:36:10.196132 kernel: acpiphp: Slot [12] registered Jun 25 18:36:10.196148 kernel: acpiphp: Slot [13] registered Jun 25 18:36:10.196169 kernel: acpiphp: Slot [14] registered Jun 25 18:36:10.196236 kernel: acpiphp: Slot [15] registered Jun 25 18:36:10.196254 kernel: acpiphp: Slot [16] registered Jun 25 18:36:10.196271 kernel: acpiphp: Slot [17] registered Jun 25 18:36:10.196288 kernel: acpiphp: Slot [18] registered Jun 25 18:36:10.196304 kernel: acpiphp: Slot [19] registered Jun 25 18:36:10.196321 kernel: acpiphp: Slot [20] registered Jun 25 18:36:10.196337 kernel: acpiphp: Slot [21] registered Jun 25 18:36:10.196354 kernel: acpiphp: Slot [22] registered Jun 25 18:36:10.196374 kernel: acpiphp: Slot [23] registered Jun 25 18:36:10.196390 kernel: acpiphp: Slot [24] registered Jun 25 18:36:10.196406 kernel: acpiphp: Slot [25] registered Jun 25 18:36:10.196422 kernel: acpiphp: Slot [26] registered Jun 25 18:36:10.196438 kernel: acpiphp: Slot [27] registered Jun 25 18:36:10.196455 kernel: acpiphp: Slot [28] registered Jun 25 18:36:10.196471 kernel: acpiphp: Slot [29] registered Jun 25 18:36:10.196487 kernel: acpiphp: Slot [30] registered Jun 25 18:36:10.196504 kernel: acpiphp: Slot [31] registered Jun 25 18:36:10.196520 kernel: PCI host bridge to bus 0000:00 
Jun 25 18:36:10.196695 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 18:36:10.196824 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 18:36:10.196947 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 18:36:10.202252 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 18:36:10.202473 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 18:36:10.202647 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 18:36:10.202859 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 18:36:10.203124 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jun 25 18:36:10.203268 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 25 18:36:10.203406 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jun 25 18:36:10.203554 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jun 25 18:36:10.203692 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jun 25 18:36:10.203893 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jun 25 18:36:10.208403 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jun 25 18:36:10.208575 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jun 25 18:36:10.208715 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jun 25 18:36:10.208861 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jun 25 18:36:10.209025 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jun 25 18:36:10.209163 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jun 25 18:36:10.209298 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 18:36:10.209522 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 25 18:36:10.209664 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jun 25 18:36:10.209808 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 25 18:36:10.214072 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jun 25 18:36:10.214115 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 18:36:10.214132 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 18:36:10.214148 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 18:36:10.214171 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 18:36:10.214186 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 18:36:10.214202 kernel: iommu: Default domain type: Translated Jun 25 18:36:10.214217 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:36:10.214231 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:36:10.214246 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 18:36:10.214263 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 18:36:10.214279 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jun 25 18:36:10.214533 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jun 25 18:36:10.214686 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jun 25 18:36:10.214884 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 18:36:10.214907 kernel: vgaarb: loaded Jun 25 18:36:10.214923 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jun 25 18:36:10.214939 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jun 25 18:36:10.214955 kernel: clocksource: Switched 
to clocksource kvm-clock Jun 25 18:36:10.214971 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:36:10.215041 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:36:10.215063 kernel: pnp: PnP ACPI init Jun 25 18:36:10.215079 kernel: pnp: PnP ACPI: found 5 devices Jun 25 18:36:10.215095 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:36:10.215111 kernel: NET: Registered PF_INET protocol family Jun 25 18:36:10.215127 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:36:10.215143 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 18:36:10.215159 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:36:10.215175 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:36:10.215191 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 18:36:10.215273 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 18:36:10.215293 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 18:36:10.215309 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 18:36:10.215325 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:36:10.215341 kernel: NET: Registered PF_XDP protocol family Jun 25 18:36:10.215492 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 18:36:10.215696 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 18:36:10.215829 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 18:36:10.215949 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 18:36:10.218924 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 18:36:10.218960 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:36:10.218977 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 18:36:10.220025 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093255d7c, max_idle_ns: 440795319144 ns Jun 25 18:36:10.220044 kernel: clocksource: Switched to clocksource tsc Jun 25 18:36:10.220061 kernel: Initialise system trusted keyrings Jun 25 18:36:10.220078 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 18:36:10.220094 kernel: Key type asymmetric registered Jun 25 18:36:10.220120 kernel: Asymmetric key parser 'x509' registered Jun 25 18:36:10.220136 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:36:10.220152 kernel: io scheduler mq-deadline registered Jun 25 18:36:10.220169 kernel: io scheduler kyber registered Jun 25 18:36:10.220185 kernel: io scheduler bfq registered Jun 25 18:36:10.220202 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:36:10.220218 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:36:10.220235 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:36:10.220252 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 18:36:10.220271 kernel: i8042: Warning: Keylock active Jun 25 18:36:10.220288 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 18:36:10.220304 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 18:36:10.220577 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 25 18:36:10.220708 kernel: rtc_cmos 00:00: registered as rtc0 Jun 25 
18:36:10.220831 kernel: rtc_cmos 00:00: setting system clock to 2024-06-25T18:36:09 UTC (1719340569) Jun 25 18:36:10.220952 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 25 18:36:10.220972 kernel: intel_pstate: CPU model not supported Jun 25 18:36:10.221008 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:36:10.221025 kernel: Segment Routing with IPv6 Jun 25 18:36:10.221041 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:36:10.221058 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:36:10.221075 kernel: Key type dns_resolver registered Jun 25 18:36:10.221091 kernel: IPI shorthand broadcast: enabled Jun 25 18:36:10.221108 kernel: sched_clock: Marking stable (716022081, 294511793)->(1104494909, -93961035) Jun 25 18:36:10.221124 kernel: registered taskstats version 1 Jun 25 18:36:10.221395 kernel: Loading compiled-in X.509 certificates Jun 25 18:36:10.221488 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:36:10.221507 kernel: Key type .fscrypt registered Jun 25 18:36:10.221523 kernel: Key type fscrypt-provisioning registered Jun 25 18:36:10.221540 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:36:10.221557 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:36:10.221573 kernel: ima: No architecture policies found Jun 25 18:36:10.221590 kernel: clk: Disabling unused clocks Jun 25 18:36:10.221606 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:36:10.221627 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:36:10.221643 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:36:10.221659 kernel: Run /init as init process Jun 25 18:36:10.221676 kernel: with arguments: Jun 25 18:36:10.221692 kernel: /init Jun 25 18:36:10.221708 kernel: with environment: Jun 25 18:36:10.221723 kernel: HOME=/ Jun 25 18:36:10.221739 kernel: TERM=linux Jun 25 18:36:10.221754 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:36:10.221775 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:36:10.221798 systemd[1]: Detected virtualization amazon. Jun 25 18:36:10.221834 systemd[1]: Detected architecture x86-64. Jun 25 18:36:10.221851 systemd[1]: Running in initrd. Jun 25 18:36:10.221869 systemd[1]: No hostname configured, using default hostname. Jun 25 18:36:10.221946 systemd[1]: Hostname set to . Jun 25 18:36:10.221967 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:36:10.224021 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:36:10.224050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:36:10.224069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:36:10.224089 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:36:10.224108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:36:10.224126 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jun 25 18:36:10.224152 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:36:10.224173 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:36:10.224191 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:36:10.224210 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:36:10.224228 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:36:10.224247 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:36:10.224265 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:36:10.224286 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:36:10.224303 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:36:10.224322 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:36:10.224340 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:36:10.224358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:36:10.224376 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 18:36:10.224394 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:36:10.224413 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:36:10.224431 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:36:10.224452 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:36:10.224470 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:36:10.224488 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:36:10.224507 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:36:10.224525 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:36:10.224543 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:36:10.224562 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:36:10.224583 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:36:10.224601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:36:10.224619 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:36:10.224637 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:36:10.224655 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:36:10.224678 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:36:10.224743 systemd-journald[178]: Collecting audit messages is disabled. Jun 25 18:36:10.224787 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:36:10.224806 systemd-journald[178]: Journal started Jun 25 18:36:10.224844 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2c9c38a621b10ebd6ff4093ada2c58) is 4.8M, max 38.6M, 33.8M free. Jun 25 18:36:10.177090 systemd-modules-load[179]: Inserted module 'overlay' Jun 25 18:36:10.386964 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jun 25 18:36:10.388154 kernel: Bridge firewalling registered Jun 25 18:36:10.388180 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:36:10.235328 systemd-modules-load[179]: Inserted module 'br_netfilter' Jun 25 18:36:10.384867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:36:10.388410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:36:10.410254 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:36:10.413535 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:36:10.416647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:36:10.428887 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:36:10.457030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:36:10.467570 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:36:10.468615 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:36:10.476468 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:36:10.493883 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:36:10.513261 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:36:10.532569 dracut-cmdline[215]: dracut-dracut-053 Jun 25 18:36:10.536917 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:36:10.560249 systemd-resolved[208]: Positive Trust Anchors: Jun 25 18:36:10.560270 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:36:10.560331 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:36:10.578374 systemd-resolved[208]: Defaulting to hostname 'linux'. Jun 25 18:36:10.581074 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:36:10.584379 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:36:10.651013 kernel: SCSI subsystem initialized Jun 25 18:36:10.671587 kernel: Loading iSCSI transport class v2.0-870. 
Jun 25 18:36:10.693013 kernel: iscsi: registered transport (tcp) Jun 25 18:36:10.727009 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:36:10.727149 kernel: QLogic iSCSI HBA Driver Jun 25 18:36:10.771624 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:36:10.781168 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:36:10.811485 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:36:10.811565 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:36:10.811586 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:36:10.871041 kernel: raid6: avx512x4 gen() 8803 MB/s Jun 25 18:36:10.888226 kernel: raid6: avx512x2 gen() 11884 MB/s Jun 25 18:36:10.905428 kernel: raid6: avx512x1 gen() 14181 MB/s Jun 25 18:36:10.922037 kernel: raid6: avx2x4 gen() 14173 MB/s Jun 25 18:36:10.939036 kernel: raid6: avx2x2 gen() 14740 MB/s Jun 25 18:36:10.956358 kernel: raid6: avx2x1 gen() 9864 MB/s Jun 25 18:36:10.956434 kernel: raid6: using algorithm avx2x2 gen() 14740 MB/s Jun 25 18:36:10.974238 kernel: raid6: .... xor() 14312 MB/s, rmw enabled Jun 25 18:36:10.974394 kernel: raid6: using avx512x2 recovery algorithm Jun 25 18:36:11.010015 kernel: xor: automatically using best checksumming function avx Jun 25 18:36:11.378010 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:36:11.406799 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:36:11.417383 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:36:11.474234 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jun 25 18:36:11.485671 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:36:11.503794 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:36:11.538696 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Jun 25 18:36:11.583316 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:36:11.596238 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:36:11.672211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:36:11.691310 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:36:11.722076 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:36:11.726682 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:36:11.734790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:36:11.744726 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:36:11.761191 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:36:11.802891 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:36:11.823413 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:36:11.856593 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:36:11.856775 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 25 18:36:11.867694 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 25 18:36:11.921237 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 25 18:36:11.921439 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:36:11.921463 kernel: AES CTR mode by8 optimization enabled Jun 25 18:36:11.921485 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jun 25 18:36:11.924055 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 25 18:36:11.924224 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 18:36:11.924240 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:f0:03:c2:7e:d1 Jun 25 18:36:11.859063 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:36:11.861478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:36:11.861708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:36:11.864411 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:36:11.878745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:36:11.940049 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 25 18:36:11.947317 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:36:11.947380 kernel: GPT:9289727 != 16777215 Jun 25 18:36:11.947393 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:36:11.947405 kernel: GPT:9289727 != 16777215 Jun 25 18:36:11.947416 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:36:11.947427 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:36:11.960951 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:36:12.091670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:36:12.111227 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:36:12.147014 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (451) Jun 25 18:36:12.155495 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (447) Jun 25 18:36:12.156949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:36:12.240628 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 25 18:36:12.276708 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 25 18:36:12.287965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 18:36:12.295527 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 25 18:36:12.295664 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 25 18:36:12.315286 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:36:12.328836 disk-uuid[631]: Primary Header is updated. Jun 25 18:36:12.328836 disk-uuid[631]: Secondary Entries is updated. Jun 25 18:36:12.328836 disk-uuid[631]: Secondary Header is updated. Jun 25 18:36:12.338014 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:36:12.345736 kernel: GPT:disk_guids don't match. 
Jun 25 18:36:12.345802 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:36:12.345822 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:36:12.358018 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:36:13.356182 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:36:13.358418 disk-uuid[632]: The operation has completed successfully. Jun 25 18:36:13.529972 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:36:13.530366 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:36:13.597770 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:36:13.609976 sh[975]: Success Jun 25 18:36:13.670652 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 18:36:13.868115 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:36:13.882388 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:36:13.897685 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:36:13.959023 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:36:13.959106 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:36:13.959139 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:36:13.961369 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:36:13.963421 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:36:14.096038 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 18:36:14.146876 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:36:14.148075 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:36:14.163828 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:36:14.173780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:36:14.236150 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:36:14.236227 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:36:14.236247 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 18:36:14.245847 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 18:36:14.262040 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:36:14.262760 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:36:14.283766 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:36:14.292438 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:36:14.357091 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:36:14.367223 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:36:14.430892 systemd-networkd[1167]: lo: Link UP Jun 25 18:36:14.430903 systemd-networkd[1167]: lo: Gained carrier Jun 25 18:36:14.433137 systemd-networkd[1167]: Enumeration completed Jun 25 18:36:14.433638 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 18:36:14.433643 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:36:14.435385 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:36:14.441668 systemd[1]: Reached target network.target - Network. Jun 25 18:36:14.445970 systemd-networkd[1167]: eth0: Link UP Jun 25 18:36:14.445976 systemd-networkd[1167]: eth0: Gained carrier Jun 25 18:36:14.446016 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:36:14.476313 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.29.210/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 18:36:14.770333 ignition[1093]: Ignition 2.19.0 Jun 25 18:36:14.770344 ignition[1093]: Stage: fetch-offline Jun 25 18:36:14.770553 ignition[1093]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:36:14.780182 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:36:14.770561 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:36:14.771010 ignition[1093]: Ignition finished successfully Jun 25 18:36:14.811262 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:36:14.839472 ignition[1177]: Ignition 2.19.0 Jun 25 18:36:14.839487 ignition[1177]: Stage: fetch Jun 25 18:36:14.840058 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:36:14.840073 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:36:14.840190 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:36:14.865767 ignition[1177]: PUT result: OK Jun 25 18:36:14.871880 ignition[1177]: parsed url from cmdline: "" Jun 25 18:36:14.872088 ignition[1177]: no config URL provided Jun 25 18:36:14.872117 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:36:14.872131 ignition[1177]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:36:14.872151 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:36:14.873565 ignition[1177]: PUT result: OK Jun 25 18:36:14.873836 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 25 18:36:14.875799 ignition[1177]: GET result: OK Jun 25 18:36:14.875865 ignition[1177]: parsing config with SHA512: 5c4472ca9bc6a2b0144331a8b435ec813796d85ee5b314f60aaae51c41134e09842c013cecb94e2d879b1bc57bf4601c67e08ecdd482acb14fc210eb84ea8c80 Jun 25 18:36:14.889957 unknown[1177]: fetched base config from "system" Jun 25 18:36:14.890203 unknown[1177]: fetched base config from "system" Jun 25 18:36:14.890909 ignition[1177]: fetch: fetch complete Jun 25 18:36:14.890216 unknown[1177]: fetched user config from "aws" Jun 25 18:36:14.890917 ignition[1177]: fetch: fetch passed Jun 25 18:36:14.891094 ignition[1177]: Ignition finished successfully Jun 25 18:36:14.895604 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:36:14.909171 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jun 25 18:36:14.939935 ignition[1184]: Ignition 2.19.0 Jun 25 18:36:14.939951 ignition[1184]: Stage: kargs Jun 25 18:36:14.941440 ignition[1184]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:36:14.941455 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:36:14.941631 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:36:14.943853 ignition[1184]: PUT result: OK Jun 25 18:36:14.950619 ignition[1184]: kargs: kargs passed Jun 25 18:36:14.950745 ignition[1184]: Ignition finished successfully Jun 25 18:36:14.952129 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:36:14.963391 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:36:15.024259 ignition[1192]: Ignition 2.19.0 Jun 25 18:36:15.024280 ignition[1192]: Stage: disks Jun 25 18:36:15.025739 ignition[1192]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:36:15.025758 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:36:15.032298 ignition[1192]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:36:15.053895 ignition[1192]: PUT result: OK Jun 25 18:36:15.059580 ignition[1192]: disks: disks passed Jun 25 18:36:15.059651 ignition[1192]: Ignition finished successfully Jun 25 18:36:15.065502 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:36:15.069434 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:36:15.076277 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:36:15.080178 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:36:15.086594 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:36:15.088667 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:36:15.098342 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:36:15.166753 systemd-fsck[1201]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:36:15.171482 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:36:15.181155 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:36:15.462088 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:36:15.463377 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:36:15.464324 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:36:15.487255 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:36:15.491091 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:36:15.496771 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:36:15.499016 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:36:15.499053 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:36:15.510122 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:36:15.512918 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 25 18:36:15.539214 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1220) Jun 25 18:36:15.546070 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:36:15.546164 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:36:15.552081 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 18:36:15.565343 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 18:36:15.566497 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:36:15.980577 initrd-setup-root[1244]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:36:16.002823 initrd-setup-root[1251]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:36:16.010303 initrd-setup-root[1258]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:36:16.016810 initrd-setup-root[1265]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:36:16.174280 systemd-networkd[1167]: eth0: Gained IPv6LL Jun 25 18:36:16.315005 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:36:16.323580 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:36:16.326359 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:36:16.345014 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:36:16.344949 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:36:16.392110 ignition[1333]: INFO : Ignition 2.19.0 Jun 25 18:36:16.392110 ignition[1333]: INFO : Stage: mount Jun 25 18:36:16.397392 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:36:16.397392 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:36:16.397392 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:36:16.403429 ignition[1333]: INFO : PUT result: OK Jun 25 18:36:16.405769 ignition[1333]: INFO : mount: mount passed Jun 25 18:36:16.407191 ignition[1333]: INFO : Ignition finished successfully Jun 25 18:36:16.408005 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:36:16.412170 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:36:16.428130 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:36:16.471327 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:36:16.496007 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1345) Jun 25 18:36:16.499252 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:36:16.499320 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:36:16.499341 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 18:36:16.506010 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 18:36:16.508921 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:36:16.539599 ignition[1362]: INFO : Ignition 2.19.0 Jun 25 18:36:16.539599 ignition[1362]: INFO : Stage: files Jun 25 18:36:16.545015 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:36:16.545015 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:36:16.545015 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:36:16.545015 ignition[1362]: INFO : PUT result: OK Jun 25 18:36:16.551050 ignition[1362]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:36:16.566947 ignition[1362]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:36:16.566947 ignition[1362]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:36:16.644390 ignition[1362]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:36:16.646181 ignition[1362]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:36:16.648438 unknown[1362]: wrote ssh authorized keys file for user: core Jun 25 18:36:16.649825 ignition[1362]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:36:16.653837 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:36:16.656581 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:36:16.708506 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:36:16.849006 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:36:16.849006 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:36:16.865622 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:36:16.865622 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:36:16.879815 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:36:16.879815 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:36:16.889565 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:36:16.889565 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:36:16.900409 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:36:16.900409 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:36:16.907033 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:36:16.907033 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:36:16.918891 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:36:16.918891 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:36:16.918891 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 18:36:17.331865 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:36:18.826301 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:36:18.833057 ignition[1362]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:36:18.839700 ignition[1362]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:36:18.846588 ignition[1362]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:36:18.846588 ignition[1362]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:36:18.846588 ignition[1362]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:36:18.866594 ignition[1362]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:36:18.866594 ignition[1362]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:36:18.881464 ignition[1362]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:36:18.881464 ignition[1362]: INFO : files: files passed Jun 25 18:36:18.881464 ignition[1362]: INFO : Ignition finished successfully Jun 25 18:36:18.872788 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:36:18.895311 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:36:18.902073 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:36:18.909643 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:36:18.909743 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:36:18.928688 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:36:18.928688 initrd-setup-root-after-ignition[1391]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:36:18.935320 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:36:18.939196 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:36:18.940806 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:36:18.955741 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jun 25 18:36:19.042310 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:36:19.042446 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:36:19.045533 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:36:19.051579 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:36:19.053159 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:36:19.061416 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:36:19.084766 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:36:19.095339 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:36:19.118360 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:36:19.118713 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:36:19.125311 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:36:19.126698 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:36:19.126825 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:36:19.138562 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:36:19.143317 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:36:19.150126 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:36:19.158658 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:36:19.160875 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:36:19.166003 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:36:19.169123 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:36:19.173407 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:36:19.177576 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:36:19.182876 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:36:19.189405 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:36:19.190253 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:36:19.207368 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:36:19.209167 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:36:19.213767 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:36:19.213958 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:36:19.219375 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:36:19.219663 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:36:19.225338 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:36:19.225613 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:36:19.229744 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:36:19.230202 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:36:19.248723 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jun 25 18:36:19.248850 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:36:19.249035 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:36:19.266410 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:36:19.268233 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:36:19.270308 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:36:19.278350 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:36:19.278631 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:36:19.361006 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:36:19.361148 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:36:19.372300 ignition[1415]: INFO : Ignition 2.19.0 Jun 25 18:36:19.372300 ignition[1415]: INFO : Stage: umount Jun 25 18:36:19.372300 ignition[1415]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:36:19.372300 ignition[1415]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:36:19.379972 ignition[1415]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:36:19.379972 ignition[1415]: INFO : PUT result: OK Jun 25 18:36:19.393261 ignition[1415]: INFO : umount: umount passed Jun 25 18:36:19.393261 ignition[1415]: INFO : Ignition finished successfully Jun 25 18:36:19.391648 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:36:19.391791 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:36:19.397880 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:36:19.398232 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:36:19.412737 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:36:19.412825 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:36:19.415114 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:36:19.415317 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:36:19.422144 systemd[1]: Stopped target network.target - Network. Jun 25 18:36:19.427326 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:36:19.427426 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:36:19.431404 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:36:19.433960 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:36:19.435411 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:36:19.449025 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:36:19.451501 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:36:19.454228 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:36:19.454345 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:36:19.457399 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:36:19.457679 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:36:19.460440 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:36:19.460509 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:36:19.464416 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jun 25 18:36:19.464881 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:36:19.481753 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:36:19.489748 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:36:19.500573 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:36:19.502853 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:36:19.510246 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:36:19.511200 systemd-networkd[1167]: eth0: DHCPv6 lease lost Jun 25 18:36:19.514187 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:36:19.514335 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:36:19.523139 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:36:19.523946 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:36:19.530380 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:36:19.530464 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:36:19.533103 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:36:19.533278 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:36:19.544020 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:36:19.547255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:36:19.547353 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:36:19.564719 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:36:19.564788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:36:19.566295 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:36:19.566350 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:36:19.567563 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:36:19.567611 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:36:19.571392 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:36:19.587580 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:36:19.588345 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:36:19.607334 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:36:19.607958 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:36:19.610812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:36:19.611746 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:36:19.615796 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:36:19.615853 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:36:19.623833 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:36:19.624011 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:36:19.631341 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:36:19.631415 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:36:19.635320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jun 25 18:36:19.635391 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:36:19.648290 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:36:19.651191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:36:19.651270 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:36:19.656674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:36:19.656804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:36:19.708831 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:36:19.709018 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:36:19.714327 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:36:19.726307 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:36:19.760685 systemd[1]: Switching root. Jun 25 18:36:19.814669 systemd-journald[178]: Journal stopped Jun 25 18:36:23.362724 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jun 25 18:36:23.362830 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:36:23.362852 kernel: SELinux: policy capability open_perms=1 Jun 25 18:36:23.362870 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:36:23.362889 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:36:23.364411 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:36:23.364468 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:36:23.364488 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:36:23.364506 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:36:23.364534 kernel: audit: type=1403 audit(1719340581.429:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:36:23.364561 systemd[1]: Successfully loaded SELinux policy in 101.187ms. Jun 25 18:36:23.364588 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.438ms. Jun 25 18:36:23.364611 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:36:23.364632 systemd[1]: Detected virtualization amazon. Jun 25 18:36:23.364651 systemd[1]: Detected architecture x86-64. Jun 25 18:36:23.364671 systemd[1]: Detected first boot. Jun 25 18:36:23.364690 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:36:23.364709 zram_generator::config[1458]: No configuration found. Jun 25 18:36:23.364734 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:36:23.364753 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:36:23.364774 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:36:23.364792 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:36:23.364812 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:36:23.364831 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:36:23.364851 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jun 25 18:36:23.364876 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:36:23.364904 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:36:23.364923 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:36:23.364943 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:36:23.364962 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:36:23.374022 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:36:23.374459 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:36:23.374496 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:36:23.374517 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:36:23.374538 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:36:23.374566 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:36:23.374589 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:36:23.374615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:36:23.374639 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:36:23.374660 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:36:23.374678 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:36:23.374792 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:36:23.374820 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:36:23.374841 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:36:23.374861 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:36:23.374880 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:36:23.374900 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:36:23.374918 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:36:23.374938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:36:23.374961 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:36:23.375003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:36:23.375031 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:36:23.375056 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:36:23.375078 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:36:23.375097 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:36:23.375119 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:36:23.375196 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:36:23.375221 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:36:23.377324 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jun 25 18:36:23.378028 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:36:23.378097 systemd[1]: Reached target machines.target - Containers. Jun 25 18:36:23.378121 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:36:23.378258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:36:23.378286 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:36:23.378344 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:36:23.378369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:36:23.378446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:36:23.378504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:36:23.378532 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:36:23.378590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:36:23.378720 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:36:23.378769 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:36:23.378793 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:36:23.378812 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:36:23.378830 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:36:23.378848 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:36:23.378866 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:36:23.378890 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:36:23.378913 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:36:23.378935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:36:23.378957 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:36:23.378978 systemd[1]: Stopped verity-setup.service. Jun 25 18:36:23.379021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:36:23.379105 kernel: loop: module loaded Jun 25 18:36:23.379168 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:36:23.379194 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:36:23.379221 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:36:23.379323 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:36:23.380129 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:36:23.380162 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:36:23.380182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:36:23.380208 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:36:23.380227 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jun 25 18:36:23.380246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:36:23.381902 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:36:23.381957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:36:23.382293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:36:23.382319 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:36:23.382338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:36:23.382365 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:36:23.382386 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:36:23.382410 kernel: fuse: init (API version 7.39) Jun 25 18:36:23.382434 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:36:23.382453 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:36:23.382472 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:36:23.382498 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:36:23.382520 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:36:23.382542 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:36:23.382565 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:36:23.382626 systemd-journald[1532]: Collecting audit messages is disabled. Jun 25 18:36:23.382671 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:36:23.382694 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:36:23.382718 systemd-journald[1532]: Journal started Jun 25 18:36:23.388518 systemd-journald[1532]: Runtime Journal (/run/log/journal/ec2c9c38a621b10ebd6ff4093ada2c58) is 4.8M, max 38.6M, 33.8M free. Jun 25 18:36:23.388783 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:36:22.674547 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:36:22.725126 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 25 18:36:22.725611 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:36:23.399119 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:36:23.423717 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:36:23.423809 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:36:23.446950 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:36:23.447105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:36:23.463028 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:36:23.475041 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:36:23.475167 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 25 18:36:23.480157 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:36:23.483195 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:36:23.485030 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:36:23.534740 kernel: ACPI: bus type drm_connector registered Jun 25 18:36:23.538024 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:36:23.539197 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:36:23.565240 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:36:23.573818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:36:23.577129 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:36:23.581353 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:36:23.596728 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:36:23.643591 systemd-journald[1532]: Time spent on flushing to /var/log/journal/ec2c9c38a621b10ebd6ff4093ada2c58 is 146.181ms for 961 entries. Jun 25 18:36:23.643591 systemd-journald[1532]: System Journal (/var/log/journal/ec2c9c38a621b10ebd6ff4093ada2c58) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:36:23.818321 systemd-journald[1532]: Received client request to flush runtime journal. Jun 25 18:36:23.818368 kernel: loop0: detected capacity change from 0 to 139760 Jun 25 18:36:23.818419 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:36:23.654877 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:36:23.675166 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:36:23.688312 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:36:23.694231 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:36:23.769381 udevadm[1591]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:36:23.803840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:36:23.839720 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:36:23.869401 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:36:23.873301 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:36:23.894089 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:36:23.897651 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:36:23.915171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:36:23.928229 kernel: loop1: detected capacity change from 0 to 80568 Jun 25 18:36:23.987229 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Jun 25 18:36:23.988354 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Jun 25 18:36:24.018524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jun 25 18:36:24.110649 kernel: loop2: detected capacity change from 0 to 209816 Jun 25 18:36:24.194052 kernel: loop3: detected capacity change from 0 to 60984 Jun 25 18:36:24.319179 kernel: loop4: detected capacity change from 0 to 139760 Jun 25 18:36:24.441994 kernel: loop5: detected capacity change from 0 to 80568 Jun 25 18:36:24.472286 kernel: loop6: detected capacity change from 0 to 209816 Jun 25 18:36:24.528027 kernel: loop7: detected capacity change from 0 to 60984 Jun 25 18:36:24.548907 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 25 18:36:24.549592 (sd-merge)[1608]: Merged extensions into '/usr'. Jun 25 18:36:24.557377 systemd[1]: Reloading requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:36:24.557662 systemd[1]: Reloading... Jun 25 18:36:24.724017 zram_generator::config[1632]: No configuration found. Jun 25 18:36:25.085151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:36:25.208692 systemd[1]: Reloading finished in 641 ms. Jun 25 18:36:25.238948 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:36:25.250495 systemd[1]: Starting ensure-sysext.service... Jun 25 18:36:25.253084 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:36:25.290202 systemd[1]: Reloading requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:36:25.290221 systemd[1]: Reloading... Jun 25 18:36:25.299738 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:36:25.300319 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:36:25.302197 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:36:25.302595 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Jun 25 18:36:25.302669 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Jun 25 18:36:25.308405 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:36:25.308497 systemd-tmpfiles[1681]: Skipping /boot Jun 25 18:36:25.336298 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:36:25.336319 systemd-tmpfiles[1681]: Skipping /boot Jun 25 18:36:25.443018 zram_generator::config[1707]: No configuration found. Jun 25 18:36:25.600467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:36:25.671524 systemd[1]: Reloading finished in 380 ms. Jun 25 18:36:25.691301 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:36:25.696608 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:36:25.717407 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:36:25.732581 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:36:25.743228 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jun 25 18:36:25.748431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:36:25.753244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:36:25.761750 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:36:25.776251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:36:25.776666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:36:25.791362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:36:25.797007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:36:25.812317 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:36:25.814749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:36:25.814957 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:36:25.816819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:36:25.818085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:36:25.842487 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:36:25.848925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:36:25.851443 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:36:25.860385 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:36:25.861799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:36:25.862141 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:36:25.863308 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:36:25.873459 systemd-udevd[1769]: Using default interface naming scheme 'v255'. Jun 25 18:36:25.883591 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:36:25.884586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:36:25.891687 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:36:25.895070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:36:25.908187 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:36:25.909543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:36:25.909859 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:36:25.912413 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:36:25.913550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 18:36:25.913750 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:36:25.918818 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:36:25.924705 systemd[1]: Finished ensure-sysext.service. Jun 25 18:36:25.946067 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:36:25.946319 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:36:25.954096 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:36:25.954356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:36:25.957096 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:36:25.963487 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:36:25.974885 ldconfig[1554]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:36:25.981442 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:36:25.998165 augenrules[1793]: No rules Jun 25 18:36:25.983390 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:36:26.000304 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:36:26.007104 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:36:26.020270 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:36:26.021755 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:36:26.067083 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:36:26.069875 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:36:26.097164 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:36:26.137240 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 18:36:26.236790 (udev-worker)[1804]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:36:26.253019 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1812) Jun 25 18:36:26.272519 systemd-networkd[1806]: lo: Link UP Jun 25 18:36:26.272536 systemd-networkd[1806]: lo: Gained carrier Jun 25 18:36:26.279285 systemd-networkd[1806]: Enumeration completed Jun 25 18:36:26.279712 systemd-networkd[1806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:36:26.279723 systemd-networkd[1806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:36:26.280728 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:36:26.287908 systemd-resolved[1768]: Positive Trust Anchors: Jun 25 18:36:26.287930 systemd-resolved[1768]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:36:26.291065 systemd-resolved[1768]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:36:26.299182 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:36:26.301628 systemd-networkd[1806]: eth0: Link UP Jun 25 18:36:26.301820 systemd-networkd[1806]: eth0: Gained carrier Jun 25 18:36:26.301855 systemd-networkd[1806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:36:26.307251 systemd-resolved[1768]: Defaulting to hostname 'linux'. Jun 25 18:36:26.308151 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 25 18:36:26.316598 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:36:26.318045 systemd[1]: Reached target network.target - Network. Jun 25 18:36:26.319200 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:36:26.320814 systemd-networkd[1806]: eth0: DHCPv4 address 172.31.29.210/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 18:36:26.357024 kernel: ACPI: button: Power Button [PWRF] Jun 25 18:36:26.357098 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jun 25 18:36:26.363007 kernel: ACPI: button: Sleep Button [SLPF] Jun 25 18:36:26.367049 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jun 25 18:36:26.376144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:36:26.387012 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jun 25 18:36:26.396137 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1824) Jun 25 18:36:26.443019 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:36:26.444543 systemd-networkd[1806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:36:26.540533 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 18:36:26.573337 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:36:26.694215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:36:26.703195 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:36:26.705427 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:36:26.741437 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:36:26.765005 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:36:26.799099 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jun 25 18:36:26.800778 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:36:26.802082 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:36:26.803644 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:36:26.805202 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:36:26.806932 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:36:26.808186 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:36:26.809748 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:36:26.812778 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:36:26.813124 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:36:26.814957 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:36:26.820783 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:36:26.824272 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:36:26.838382 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:36:26.847947 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:36:26.850372 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:36:26.852760 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:36:26.853964 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:36:26.855233 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:36:26.855273 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:36:26.866357 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:36:26.873007 lvm[1934]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:36:26.882193 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:36:26.897312 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:36:26.903245 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:36:26.911378 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:36:26.921309 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:36:26.930886 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:36:26.945581 systemd[1]: Started ntpd.service - Network Time Service. Jun 25 18:36:26.953151 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:36:26.996253 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 25 18:36:27.006933 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:36:27.012212 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:36:27.025444 jq[1938]: false Jun 25 18:36:27.024373 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jun 25 18:36:27.026277 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:36:27.027042 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:36:27.035123 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:36:27.042214 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:36:27.046094 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:36:27.056378 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:36:27.056632 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:36:27.065574 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:36:27.066000 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:36:27.105456 extend-filesystems[1939]: Found loop4 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found loop5 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found loop6 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found loop7 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found nvme0n1 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found nvme0n1p1 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found nvme0n1p2 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found nvme0n1p3 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found usr Jun 25 18:36:27.119757 extend-filesystems[1939]: Found nvme0n1p4 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found nvme0n1p6 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found nvme0n1p7 Jun 25 18:36:27.119757 extend-filesystems[1939]: Found nvme0n1p9 Jun 25 18:36:27.119757 extend-filesystems[1939]: Checking size of /dev/nvme0n1p9 Jun 25 18:36:27.155668 jq[1952]: true Jun 25 18:36:27.178270 (ntainerd)[1971]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:36:27.227115 tar[1957]: linux-amd64/helm Jun 25 18:36:27.228657 dbus-daemon[1937]: [system] SELinux support is enabled Jun 25 18:36:27.229343 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:36:27.238215 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:36:27.243378 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:36:27.244926 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:36:27.244956 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:36:27.276458 jq[1967]: true Jun 25 18:36:27.255781 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:36:27.261339 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:36:27.287465 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jun 25 18:36:27.290627 update_engine[1950]: I0625 18:36:27.290045 1950 main.cc:92] Flatcar Update Engine starting Jun 25 18:36:27.315686 extend-filesystems[1939]: Resized partition /dev/nvme0n1p9 Jun 25 18:36:27.309160 dbus-daemon[1937]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1806 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 18:36:27.333320 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 18:36:27.347827 update_engine[1950]: I0625 18:36:27.347149 1950 update_check_scheduler.cc:74] Next update check in 6m6s Jun 25 18:36:27.348201 extend-filesystems[1991]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:36:27.334729 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:36:27.345508 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:36:27.356039 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 25 18:36:27.375437 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: ntpd 4.2.8p17@1.4004-o Tue Jun 25 16:52:45 UTC 2024 (1): Starting Jun 25 18:36:27.375437 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 25 18:36:27.375437 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: ---------------------------------------------------- Jun 25 18:36:27.375437 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: ntp-4 is maintained by Network Time Foundation, Jun 25 18:36:27.375437 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 25 18:36:27.375437 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: corporation. Support and training for ntp-4 are Jun 25 18:36:27.375437 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: available at https://www.nwtime.org/support Jun 25 18:36:27.375437 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: ---------------------------------------------------- Jun 25 18:36:27.374441 ntpd[1941]: ntpd 4.2.8p17@1.4004-o Tue Jun 25 16:52:45 UTC 2024 (1): Starting Jun 25 18:36:27.374467 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 25 18:36:27.374477 ntpd[1941]: ---------------------------------------------------- Jun 25 18:36:27.374487 ntpd[1941]: ntp-4 is maintained by Network Time Foundation, Jun 25 18:36:27.374496 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 25 18:36:27.374505 ntpd[1941]: corporation. 
Support and training for ntp-4 are Jun 25 18:36:27.374514 ntpd[1941]: available at https://www.nwtime.org/support Jun 25 18:36:27.374525 ntpd[1941]: ---------------------------------------------------- Jun 25 18:36:27.386813 ntpd[1941]: proto: precision = 0.105 usec (-23) Jun 25 18:36:27.390147 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: proto: precision = 0.105 usec (-23) Jun 25 18:36:27.392699 ntpd[1941]: basedate set to 2024-06-13 Jun 25 18:36:27.392733 ntpd[1941]: gps base set to 2024-06-16 (week 2319) Jun 25 18:36:27.392907 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: basedate set to 2024-06-13 Jun 25 18:36:27.392907 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: gps base set to 2024-06-16 (week 2319) Jun 25 18:36:27.397865 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123 Jun 25 18:36:27.405973 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123 Jun 25 18:36:27.405973 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 25 18:36:27.405973 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123 Jun 25 18:36:27.405973 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: Listen normally on 3 eth0 172.31.29.210:123 Jun 25 18:36:27.405498 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 25 18:36:27.405851 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123 Jun 25 18:36:27.405921 ntpd[1941]: Listen normally on 3 eth0 172.31.29.210:123 Jun 25 18:36:27.410700 ntpd[1941]: Listen normally on 4 lo [::1]:123 Jun 25 18:36:27.420953 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: Listen normally on 4 lo [::1]:123 Jun 25 18:36:27.420953 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: bind(21) AF_INET6 fe80::4f0:3ff:fec2:7ed1%2#123 flags 0x11 failed: Cannot assign requested address Jun 25 18:36:27.420953 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: unable to create socket on eth0 (5) for fe80::4f0:3ff:fec2:7ed1%2#123 Jun 25 18:36:27.420953 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: failed to init interface for address fe80::4f0:3ff:fec2:7ed1%2 Jun 25 18:36:27.420953 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: Listening on routing socket on fd #21 for interface updates Jun 25 18:36:27.410814 ntpd[1941]: bind(21) AF_INET6 fe80::4f0:3ff:fec2:7ed1%2#123 flags 0x11 failed: Cannot assign requested address Jun 25 18:36:27.410928 ntpd[1941]: unable to create socket on eth0 (5) for fe80::4f0:3ff:fec2:7ed1%2#123 Jun 25 18:36:27.410946 ntpd[1941]: failed to init interface for address fe80::4f0:3ff:fec2:7ed1%2 Jun 25 18:36:27.411017 ntpd[1941]: Listening on routing socket on fd #21 for interface updates Jun 25 18:36:27.435291 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:36:27.454469 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:36:27.454469 ntpd[1941]: 25 Jun 18:36:27 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:36:27.454163 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:36:27.483200 coreos-metadata[1936]: Jun 25 18:36:27.483 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 18:36:27.489086 coreos-metadata[1936]: Jun 25 18:36:27.486 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 25 18:36:27.489579 coreos-metadata[1936]: Jun 25 18:36:27.489 INFO Fetch successful Jun 25 18:36:27.489669 coreos-metadata[1936]: Jun 25 18:36:27.489 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 25 18:36:27.491218 coreos-metadata[1936]: Jun 25 
18:36:27.491 INFO Fetch successful Jun 25 18:36:27.491290 coreos-metadata[1936]: Jun 25 18:36:27.491 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 25 18:36:27.492245 coreos-metadata[1936]: Jun 25 18:36:27.492 INFO Fetch successful Jun 25 18:36:27.492594 coreos-metadata[1936]: Jun 25 18:36:27.492 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 25 18:36:27.495438 coreos-metadata[1936]: Jun 25 18:36:27.495 INFO Fetch successful Jun 25 18:36:27.495438 coreos-metadata[1936]: Jun 25 18:36:27.495 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 25 18:36:27.496159 coreos-metadata[1936]: Jun 25 18:36:27.496 INFO Fetch failed with 404: resource not found Jun 25 18:36:27.496230 coreos-metadata[1936]: Jun 25 18:36:27.496 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 25 18:36:27.501005 coreos-metadata[1936]: Jun 25 18:36:27.498 INFO Fetch successful Jun 25 18:36:27.501005 coreos-metadata[1936]: Jun 25 18:36:27.498 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 25 18:36:27.501005 coreos-metadata[1936]: Jun 25 18:36:27.500 INFO Fetch successful Jun 25 18:36:27.501005 coreos-metadata[1936]: Jun 25 18:36:27.500 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 25 18:36:27.656907 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1799) Jun 25 18:36:27.628581 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:36:27.657416 coreos-metadata[1936]: Jun 25 18:36:27.501 INFO Fetch successful Jun 25 18:36:27.657416 coreos-metadata[1936]: Jun 25 18:36:27.501 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 25 18:36:27.657416 coreos-metadata[1936]: Jun 25 18:36:27.503 INFO Fetch successful Jun 25 18:36:27.657416 coreos-metadata[1936]: Jun 25 18:36:27.503 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 25 18:36:27.657416 coreos-metadata[1936]: Jun 25 18:36:27.508 INFO Fetch successful Jun 25 18:36:27.633345 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:36:27.671553 systemd-logind[1946]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 18:36:27.671584 systemd-logind[1946]: Watching system buttons on /dev/input/event2 (Sleep Button) Jun 25 18:36:27.671609 systemd-logind[1946]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:36:27.691251 systemd-logind[1946]: New seat seat0. Jun 25 18:36:27.701354 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 25 18:36:27.751269 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:36:27.844015 extend-filesystems[1991]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 25 18:36:27.844015 extend-filesystems[1991]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:36:27.844015 extend-filesystems[1991]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 25 18:36:27.876621 extend-filesystems[1939]: Resized filesystem in /dev/nvme0n1p9 Jun 25 18:36:27.867180 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
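For reference, the coreos-metadata fetches above follow the usual EC2 IMDSv2 pattern: PUT for a session token at /latest/api/token, then GET individual meta-data paths with that token (the 2021-01-03 API version is the one shown in the journal). A minimal Python sketch of that flow, assuming it runs on an EC2 instance with IMDS reachable; the helper names are hypothetical, not part of coreos-metadata:

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl=21600):
        # IMDSv2: request a session token with a PUT, matching the
        # "Putting http://169.254.169.254/latest/api/token" line above.
        req = urllib.request.Request(
            IMDS + "/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path, token):
        # Fetch one meta-data path (e.g. "instance-id") using the token.
        req = urllib.request.Request(
            IMDS + "/2021-01-03/meta-data/" + path,
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        for path in ("instance-id", "instance-type", "local-ipv4"):
            print(path, "=", imds_get(path, token))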
Jun 25 18:36:27.881993 bash[2010]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:36:27.871029 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:36:27.871284 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:36:27.897622 systemd[1]: Starting sshkeys.service... Jun 25 18:36:27.938806 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:36:27.945901 dbus-daemon[1937]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 18:36:27.948588 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 25 18:36:27.951527 dbus-daemon[1937]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1990 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 18:36:27.960398 systemd[1]: Starting polkit.service - Authorization Manager... Jun 25 18:36:27.991712 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 18:36:28.003256 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 18:36:28.010468 polkitd[2057]: Started polkitd version 121 Jun 25 18:36:28.014170 systemd-networkd[1806]: eth0: Gained IPv6LL Jun 25 18:36:28.027979 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:36:28.031409 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:36:28.043460 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 25 18:36:28.057426 polkitd[2057]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 18:36:28.057526 polkitd[2057]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 18:36:28.059932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:36:28.066394 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:36:28.073603 polkitd[2057]: Finished loading, compiling and executing 2 rules Jun 25 18:36:28.086232 dbus-daemon[1937]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 18:36:28.086450 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 18:36:28.105874 polkitd[2057]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 18:36:28.200721 systemd-hostnamed[1990]: Hostname set to (transient) Jun 25 18:36:28.202726 systemd-resolved[1768]: System hostname changed to 'ip-172-31-29-210'. Jun 25 18:36:28.255744 coreos-metadata[2070]: Jun 25 18:36:28.255 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 18:36:28.265419 coreos-metadata[2070]: Jun 25 18:36:28.264 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 25 18:36:28.265536 coreos-metadata[2070]: Jun 25 18:36:28.265 INFO Fetch successful Jun 25 18:36:28.265536 coreos-metadata[2070]: Jun 25 18:36:28.265 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 18:36:28.268062 coreos-metadata[2070]: Jun 25 18:36:28.268 INFO Fetch successful Jun 25 18:36:28.273304 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
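As a back-of-the-envelope check on the resize2fs output above (not from the journal itself): growing /dev/nvme0n1p9 from 553472 to 1489915 blocks at the reported 4k block size takes the root filesystem from roughly 2.1 GiB to roughly 5.7 GiB. A tiny Python sketch of that arithmetic:

    BLOCK_SIZE = 4096  # 4k blocks, as reported by resize2fs above

    def blocks_to_gib(blocks, block_size=BLOCK_SIZE):
        return blocks * block_size / 2**30

    old, new = 553472, 1489915
    print(f"before: {blocks_to_gib(old):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {blocks_to_gib(new):.2f} GiB")  # ~5.68 GiB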
Jun 25 18:36:28.291886 unknown[2070]: wrote ssh authorized keys file for user: core Jun 25 18:36:28.333998 locksmithd[1995]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:36:28.455405 update-ssh-keys[2130]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:36:28.453669 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 18:36:28.462474 systemd[1]: Finished sshkeys.service. Jun 25 18:36:28.479530 amazon-ssm-agent[2082]: Initializing new seelog logger Jun 25 18:36:28.481367 amazon-ssm-agent[2082]: New Seelog Logger Creation Complete Jun 25 18:36:28.481809 amazon-ssm-agent[2082]: 2024/06/25 18:36:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:36:28.482508 amazon-ssm-agent[2082]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:36:28.488637 amazon-ssm-agent[2082]: 2024/06/25 18:36:28 processing appconfig overrides Jun 25 18:36:28.492520 amazon-ssm-agent[2082]: 2024/06/25 18:36:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:36:28.494102 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO Proxy environment variables: Jun 25 18:36:28.494304 amazon-ssm-agent[2082]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:36:28.496012 amazon-ssm-agent[2082]: 2024/06/25 18:36:28 processing appconfig overrides Jun 25 18:36:28.496012 amazon-ssm-agent[2082]: 2024/06/25 18:36:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:36:28.496012 amazon-ssm-agent[2082]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:36:28.496012 amazon-ssm-agent[2082]: 2024/06/25 18:36:28 processing appconfig overrides Jun 25 18:36:28.512124 amazon-ssm-agent[2082]: 2024/06/25 18:36:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:36:28.512124 amazon-ssm-agent[2082]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:36:28.512259 amazon-ssm-agent[2082]: 2024/06/25 18:36:28 processing appconfig overrides Jun 25 18:36:28.598003 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO http_proxy: Jun 25 18:36:28.629546 containerd[1971]: time="2024-06-25T18:36:28.629443380Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:36:28.694727 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO no_proxy: Jun 25 18:36:28.797059 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO https_proxy: Jun 25 18:36:28.822449 containerd[1971]: time="2024-06-25T18:36:28.822176770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:36:28.822449 containerd[1971]: time="2024-06-25T18:36:28.822235211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:36:28.828322 containerd[1971]: time="2024-06-25T18:36:28.827399313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:36:28.828322 containerd[1971]: time="2024-06-25T18:36:28.827452197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:36:28.828322 containerd[1971]: time="2024-06-25T18:36:28.827729170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:36:28.828322 containerd[1971]: time="2024-06-25T18:36:28.827761249Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:36:28.828322 containerd[1971]: time="2024-06-25T18:36:28.827860357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:36:28.831556 containerd[1971]: time="2024-06-25T18:36:28.827993188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:36:28.831556 containerd[1971]: time="2024-06-25T18:36:28.831395076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:36:28.833094 containerd[1971]: time="2024-06-25T18:36:28.833061111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:36:28.833973 containerd[1971]: time="2024-06-25T18:36:28.833389533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:36:28.833973 containerd[1971]: time="2024-06-25T18:36:28.833425482Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:36:28.833973 containerd[1971]: time="2024-06-25T18:36:28.833442668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:36:28.833973 containerd[1971]: time="2024-06-25T18:36:28.833622731Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:36:28.833973 containerd[1971]: time="2024-06-25T18:36:28.833645028Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:36:28.833973 containerd[1971]: time="2024-06-25T18:36:28.833712572Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:36:28.833973 containerd[1971]: time="2024-06-25T18:36:28.833727269Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851116695Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851175120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851195721Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851236051Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851257067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851272379Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851292073Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851469085Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851490675Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851510681Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851531832Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:36:28.851586 containerd[1971]: time="2024-06-25T18:36:28.851552216Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:36:28.852144 containerd[1971]: time="2024-06-25T18:36:28.852060765Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:36:28.852144 containerd[1971]: time="2024-06-25T18:36:28.852124577Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:36:28.852218 containerd[1971]: time="2024-06-25T18:36:28.852145817Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:36:28.852218 containerd[1971]: time="2024-06-25T18:36:28.852174642Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:36:28.852218 containerd[1971]: time="2024-06-25T18:36:28.852196179Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:36:28.852358 containerd[1971]: time="2024-06-25T18:36:28.852215157Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:36:28.852358 containerd[1971]: time="2024-06-25T18:36:28.852234069Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:36:28.852425 containerd[1971]: time="2024-06-25T18:36:28.852399699Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:36:28.856465 containerd[1971]: time="2024-06-25T18:36:28.856328944Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:36:28.856465 containerd[1971]: time="2024-06-25T18:36:28.856388067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.856465 containerd[1971]: time="2024-06-25T18:36:28.856410080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857521888Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857640558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857664962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857836019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857857005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857879896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857900558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857920501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857938679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.857957945Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.858160647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.858184450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.858203768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.858224373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.858642 containerd[1971]: time="2024-06-25T18:36:28.858245045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.859307 containerd[1971]: time="2024-06-25T18:36:28.858267380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.859307 containerd[1971]: time="2024-06-25T18:36:28.858286541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:36:28.859307 containerd[1971]: time="2024-06-25T18:36:28.858303994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 18:36:28.859423 containerd[1971]: time="2024-06-25T18:36:28.858682207Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:36:28.859423 containerd[1971]: time="2024-06-25T18:36:28.858761226Z" level=info msg="Connect containerd service" Jun 25 18:36:28.859423 containerd[1971]: time="2024-06-25T18:36:28.858801761Z" level=info msg="using legacy CRI server" Jun 25 18:36:28.859423 containerd[1971]: time="2024-06-25T18:36:28.858814068Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:36:28.859423 containerd[1971]: time="2024-06-25T18:36:28.858949962Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.868864493Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:36:28.873015 
containerd[1971]: time="2024-06-25T18:36:28.868949296Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.868975768Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869009461Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869027713Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869355522Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869403079Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869445313Z" level=info msg="Start subscribing containerd event" Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869492433Z" level=info msg="Start recovering state" Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869570846Z" level=info msg="Start event monitor" Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869584070Z" level=info msg="Start snapshots syncer" Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869596991Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:36:28.873015 containerd[1971]: time="2024-06-25T18:36:28.869608196Z" level=info msg="Start streaming server" Jun 25 18:36:28.869774 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:36:28.878025 containerd[1971]: time="2024-06-25T18:36:28.876643065Z" level=info msg="containerd successfully booted in 0.259617s" Jun 25 18:36:28.898008 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO Checking if agent identity type OnPrem can be assumed Jun 25 18:36:28.996451 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO Checking if agent identity type EC2 can be assumed Jun 25 18:36:29.097277 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO Agent will take identity from EC2 Jun 25 18:36:29.196383 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 18:36:29.297513 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 18:36:29.325651 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 18:36:29.325651 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 25 18:36:29.325651 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jun 25 18:36:29.325859 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO [amazon-ssm-agent] Starting Core Agent Jun 25 18:36:29.325859 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jun 25 18:36:29.325859 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO [Registrar] Starting registrar module Jun 25 18:36:29.325859 amazon-ssm-agent[2082]: 2024-06-25 18:36:28 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 25 18:36:29.325859 amazon-ssm-agent[2082]: 2024-06-25 18:36:29 INFO [EC2Identity] EC2 registration was successful. Jun 25 18:36:29.325859 amazon-ssm-agent[2082]: 2024-06-25 18:36:29 INFO [CredentialRefresher] credentialRefresher has started Jun 25 18:36:29.325859 amazon-ssm-agent[2082]: 2024-06-25 18:36:29 INFO [CredentialRefresher] Starting credentials refresher loop Jun 25 18:36:29.325859 amazon-ssm-agent[2082]: 2024-06-25 18:36:29 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 25 18:36:29.405547 amazon-ssm-agent[2082]: 2024-06-25 18:36:29 INFO [CredentialRefresher] Next credential rotation will be in 32.366659040916666 minutes Jun 25 18:36:29.432447 sshd_keygen[1985]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:36:29.496853 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:36:29.512915 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:36:29.524688 systemd[1]: Started sshd@0-172.31.29.210:22-139.178.68.195:40766.service - OpenSSH per-connection server daemon (139.178.68.195:40766). Jun 25 18:36:29.547555 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:36:29.547829 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:36:29.561389 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:36:29.592574 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:36:29.606232 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:36:29.617141 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:36:29.619800 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:36:29.749820 tar[1957]: linux-amd64/LICENSE Jun 25 18:36:29.752228 tar[1957]: linux-amd64/README.md Jun 25 18:36:29.777751 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:36:29.821223 sshd[2174]: Accepted publickey for core from 139.178.68.195 port 40766 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:36:29.823293 sshd[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:29.842162 systemd-logind[1946]: New session 1 of user core. Jun 25 18:36:29.845490 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:36:29.858482 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:36:29.896965 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:36:29.907499 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:36:29.927310 (systemd)[2188]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:30.202864 systemd[2188]: Queued start job for default target default.target. Jun 25 18:36:30.208753 systemd[2188]: Created slice app.slice - User Application Slice. Jun 25 18:36:30.208800 systemd[2188]: Reached target paths.target - Paths. Jun 25 18:36:30.208821 systemd[2188]: Reached target timers.target - Timers. Jun 25 18:36:30.212221 systemd[2188]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jun 25 18:36:30.232292 systemd[2188]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:36:30.232715 systemd[2188]: Reached target sockets.target - Sockets. Jun 25 18:36:30.232994 systemd[2188]: Reached target basic.target - Basic System. Jun 25 18:36:30.233326 systemd[2188]: Reached target default.target - Main User Target. Jun 25 18:36:30.234181 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:36:30.234445 systemd[2188]: Startup finished in 296ms. Jun 25 18:36:30.243303 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:36:30.355801 amazon-ssm-agent[2082]: 2024-06-25 18:36:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 25 18:36:30.375351 ntpd[1941]: Listen normally on 6 eth0 [fe80::4f0:3ff:fec2:7ed1%2]:123 Jun 25 18:36:30.375896 ntpd[1941]: 25 Jun 18:36:30 ntpd[1941]: Listen normally on 6 eth0 [fe80::4f0:3ff:fec2:7ed1%2]:123 Jun 25 18:36:30.417569 systemd[1]: Started sshd@1-172.31.29.210:22-139.178.68.195:40782.service - OpenSSH per-connection server daemon (139.178.68.195:40782). Jun 25 18:36:30.459839 amazon-ssm-agent[2082]: 2024-06-25 18:36:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2199) started Jun 25 18:36:30.559950 amazon-ssm-agent[2082]: 2024-06-25 18:36:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 25 18:36:30.647856 sshd[2201]: Accepted publickey for core from 139.178.68.195 port 40782 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:36:30.655260 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:30.682294 systemd-logind[1946]: New session 2 of user core. Jun 25 18:36:30.685191 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:36:30.754243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:30.759634 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:36:30.761237 systemd[1]: Startup finished in 887ms (kernel) + 11.634s (initrd) + 9.431s (userspace) = 21.953s. Jun 25 18:36:30.771780 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:36:30.869506 sshd[2201]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:30.876655 systemd[1]: sshd@1-172.31.29.210:22-139.178.68.195:40782.service: Deactivated successfully. Jun 25 18:36:30.878089 systemd-logind[1946]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:36:30.880005 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:36:30.909367 systemd-logind[1946]: Removed session 2. Jun 25 18:36:30.914487 systemd[1]: Started sshd@2-172.31.29.210:22-139.178.68.195:40798.service - OpenSSH per-connection server daemon (139.178.68.195:40798). Jun 25 18:36:31.097653 sshd[2225]: Accepted publickey for core from 139.178.68.195 port 40798 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:36:31.098937 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:31.105294 systemd-logind[1946]: New session 3 of user core. Jun 25 18:36:31.112524 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jun 25 18:36:31.237458 sshd[2225]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:31.247498 systemd[1]: sshd@2-172.31.29.210:22-139.178.68.195:40798.service: Deactivated successfully. Jun 25 18:36:31.250830 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:36:31.252826 systemd-logind[1946]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:36:31.278017 systemd[1]: Started sshd@3-172.31.29.210:22-139.178.68.195:40810.service - OpenSSH per-connection server daemon (139.178.68.195:40810). Jun 25 18:36:31.282024 systemd-logind[1946]: Removed session 3. Jun 25 18:36:31.489070 sshd[2237]: Accepted publickey for core from 139.178.68.195 port 40810 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:36:31.494663 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:31.509860 systemd-logind[1946]: New session 4 of user core. Jun 25 18:36:31.513209 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:36:31.642426 sshd[2237]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:31.650065 systemd[1]: sshd@3-172.31.29.210:22-139.178.68.195:40810.service: Deactivated successfully. Jun 25 18:36:31.653246 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:36:31.657835 systemd-logind[1946]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:36:31.660901 systemd-logind[1946]: Removed session 4. Jun 25 18:36:31.684678 systemd[1]: Started sshd@4-172.31.29.210:22-139.178.68.195:40818.service - OpenSSH per-connection server daemon (139.178.68.195:40818). Jun 25 18:36:31.884634 sshd[2245]: Accepted publickey for core from 139.178.68.195 port 40818 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:36:31.885840 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:31.893994 systemd-logind[1946]: New session 5 of user core. Jun 25 18:36:31.898919 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:36:32.071867 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:36:32.072262 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:36:32.104139 sudo[2249]: pam_unix(sudo:session): session closed for user root Jun 25 18:36:32.129184 kubelet[2216]: E0625 18:36:32.129050 2216 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:36:32.130354 sshd[2245]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:32.134366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:36:32.135477 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:36:32.136126 systemd[1]: kubelet.service: Consumed 1.114s CPU time. Jun 25 18:36:32.136765 systemd[1]: sshd@4-172.31.29.210:22-139.178.68.195:40818.service: Deactivated successfully. Jun 25 18:36:32.140821 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:36:32.142686 systemd-logind[1946]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:36:32.144561 systemd-logind[1946]: Removed session 5. 
Jun 25 18:36:32.172301 systemd[1]: Started sshd@5-172.31.29.210:22-139.178.68.195:40832.service - OpenSSH per-connection server daemon (139.178.68.195:40832). Jun 25 18:36:32.328647 sshd[2255]: Accepted publickey for core from 139.178.68.195 port 40832 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:36:32.330201 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:32.342146 systemd-logind[1946]: New session 6 of user core. Jun 25 18:36:32.351271 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:36:32.461237 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:36:32.461751 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:36:32.469836 sudo[2259]: pam_unix(sudo:session): session closed for user root Jun 25 18:36:32.481312 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:36:32.482055 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:36:32.501430 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:36:32.514329 auditctl[2262]: No rules Jun 25 18:36:32.515752 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:36:32.517407 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:36:32.527349 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:36:32.590364 augenrules[2280]: No rules Jun 25 18:36:32.592223 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:36:32.594867 sudo[2258]: pam_unix(sudo:session): session closed for user root Jun 25 18:36:32.619919 sshd[2255]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:32.631054 systemd[1]: sshd@5-172.31.29.210:22-139.178.68.195:40832.service: Deactivated successfully. Jun 25 18:36:32.640786 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:36:32.646241 systemd-logind[1946]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:36:32.670616 systemd[1]: Started sshd@6-172.31.29.210:22-139.178.68.195:40844.service - OpenSSH per-connection server daemon (139.178.68.195:40844). Jun 25 18:36:32.672612 systemd-logind[1946]: Removed session 6. Jun 25 18:36:32.844759 sshd[2288]: Accepted publickey for core from 139.178.68.195 port 40844 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:36:32.846612 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:32.857539 systemd-logind[1946]: New session 7 of user core. Jun 25 18:36:32.872126 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:36:32.979751 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:36:32.980131 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:36:33.253845 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:36:33.254856 (dockerd)[2301]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:36:34.097197 dockerd[2301]: time="2024-06-25T18:36:34.097129485Z" level=info msg="Starting up" Jun 25 18:36:35.238717 systemd-resolved[1768]: Clock change detected. Flushing caches. 
Jun 25 18:36:35.560742 dockerd[2301]: time="2024-06-25T18:36:35.560496137Z" level=info msg="Loading containers: start." Jun 25 18:36:35.748880 kernel: Initializing XFRM netlink socket Jun 25 18:36:35.796356 (udev-worker)[2359]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:36:35.907428 systemd-networkd[1806]: docker0: Link UP Jun 25 18:36:35.931365 dockerd[2301]: time="2024-06-25T18:36:35.931322047Z" level=info msg="Loading containers: done." Jun 25 18:36:36.129044 dockerd[2301]: time="2024-06-25T18:36:36.128990096Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:36:36.129557 dockerd[2301]: time="2024-06-25T18:36:36.129235744Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:36:36.129557 dockerd[2301]: time="2024-06-25T18:36:36.129373686Z" level=info msg="Daemon has completed initialization" Jun 25 18:36:36.198742 dockerd[2301]: time="2024-06-25T18:36:36.197874363Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:36:36.197987 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:36:37.736684 containerd[1971]: time="2024-06-25T18:36:37.736631077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:36:38.654918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711421530.mount: Deactivated successfully. Jun 25 18:36:42.641248 containerd[1971]: time="2024-06-25T18:36:42.641187469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:42.643477 containerd[1971]: time="2024-06-25T18:36:42.643416046Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 18:36:42.647869 containerd[1971]: time="2024-06-25T18:36:42.647200611Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:42.666840 containerd[1971]: time="2024-06-25T18:36:42.666787010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:42.674330 containerd[1971]: time="2024-06-25T18:36:42.674271522Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 4.937578119s" Jun 25 18:36:42.674330 containerd[1971]: time="2024-06-25T18:36:42.674336856Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 18:36:42.742592 containerd[1971]: time="2024-06-25T18:36:42.742550502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 18:36:43.248688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jun 25 18:36:43.254413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:36:44.830492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:44.845323 (kubelet)[2502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:36:44.943973 kubelet[2502]: E0625 18:36:44.943875 2502 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:36:44.949232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:36:44.949434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:36:48.119127 containerd[1971]: time="2024-06-25T18:36:48.119061010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:48.179700 containerd[1971]: time="2024-06-25T18:36:48.179603479Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 18:36:48.242299 containerd[1971]: time="2024-06-25T18:36:48.242204945Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:48.323292 containerd[1971]: time="2024-06-25T18:36:48.323197695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:48.324687 containerd[1971]: time="2024-06-25T18:36:48.324588573Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 5.581989933s" Jun 25 18:36:48.324687 containerd[1971]: time="2024-06-25T18:36:48.324640353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 18:36:48.369436 containerd[1971]: time="2024-06-25T18:36:48.369289987Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 18:36:51.131271 containerd[1971]: time="2024-06-25T18:36:51.131213874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:51.133866 containerd[1971]: time="2024-06-25T18:36:51.133785093Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 18:36:51.136248 containerd[1971]: time="2024-06-25T18:36:51.136179529Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:51.140816 containerd[1971]: 
time="2024-06-25T18:36:51.140747562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:51.143574 containerd[1971]: time="2024-06-25T18:36:51.143341565Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 2.774010123s" Jun 25 18:36:51.143574 containerd[1971]: time="2024-06-25T18:36:51.143394618Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 18:36:51.170956 containerd[1971]: time="2024-06-25T18:36:51.170911789Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 18:36:53.838012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222693852.mount: Deactivated successfully. Jun 25 18:36:54.844914 containerd[1971]: time="2024-06-25T18:36:54.844851526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:54.847098 containerd[1971]: time="2024-06-25T18:36:54.846891379Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 18:36:54.849138 containerd[1971]: time="2024-06-25T18:36:54.849070701Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:54.853054 containerd[1971]: time="2024-06-25T18:36:54.852980713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:54.854190 containerd[1971]: time="2024-06-25T18:36:54.853723275Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 3.682760903s" Jun 25 18:36:54.854190 containerd[1971]: time="2024-06-25T18:36:54.853763941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 18:36:54.881269 containerd[1971]: time="2024-06-25T18:36:54.881227093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:36:55.199884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:36:55.211990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:36:55.789083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112774145.mount: Deactivated successfully. 
Jun 25 18:36:55.823175 containerd[1971]: time="2024-06-25T18:36:55.821861882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:55.825164 containerd[1971]: time="2024-06-25T18:36:55.825084211Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 18:36:55.830760 containerd[1971]: time="2024-06-25T18:36:55.830705819Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:55.836814 containerd[1971]: time="2024-06-25T18:36:55.835224431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:55.837215 containerd[1971]: time="2024-06-25T18:36:55.837176710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 955.717531ms" Jun 25 18:36:55.837298 containerd[1971]: time="2024-06-25T18:36:55.837241061Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:36:55.855016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:55.865153 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:36:55.894323 containerd[1971]: time="2024-06-25T18:36:55.893934451Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:36:55.956127 kubelet[2557]: E0625 18:36:55.956081 2557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:36:55.959169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:36:55.959452 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:36:56.584903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609866654.mount: Deactivated successfully. Jun 25 18:36:59.100863 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
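The repeated kubelet exits above all trace back to the same precondition: /var/lib/kubelet/config.yaml does not exist yet, so the unit fails with status 1 and systemd keeps scheduling restarts (restart counters 1, 2, 3 in this log). That file is normally written when the node is joined to a cluster, for example by kubeadm. A hypothetical Python helper, purely illustrative and not part of the kubelet, that polls for the file while debugging such a restart loop:

    import os
    import time

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # the file the journal shows missing

    def wait_for_kubelet_config(path=KUBELET_CONFIG, interval=5, timeout=600):
        # The kubelet will keep exiting until something writes this file,
        # so just poll for its existence up to the given timeout.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(path):
                return True
            time.sleep(interval)
        return False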
Jun 25 18:36:59.773178 containerd[1971]: time="2024-06-25T18:36:59.773120163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:59.775635 containerd[1971]: time="2024-06-25T18:36:59.775357747Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 18:36:59.778210 containerd[1971]: time="2024-06-25T18:36:59.777681503Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:59.782198 containerd[1971]: time="2024-06-25T18:36:59.782152812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:59.783530 containerd[1971]: time="2024-06-25T18:36:59.783485569Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.889504229s" Jun 25 18:36:59.783648 containerd[1971]: time="2024-06-25T18:36:59.783539635Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 18:36:59.814324 containerd[1971]: time="2024-06-25T18:36:59.814287518Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 18:37:00.497350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294309857.mount: Deactivated successfully. 
Jun 25 18:37:01.542104 containerd[1971]: time="2024-06-25T18:37:01.542039504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:01.543422 containerd[1971]: time="2024-06-25T18:37:01.543363991Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 18:37:01.546601 containerd[1971]: time="2024-06-25T18:37:01.546557030Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:01.556777 containerd[1971]: time="2024-06-25T18:37:01.555518927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:01.558127 containerd[1971]: time="2024-06-25T18:37:01.557906529Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.743575472s" Jun 25 18:37:01.558127 containerd[1971]: time="2024-06-25T18:37:01.557961471Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 18:37:06.021362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 18:37:06.041480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:06.579026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:06.591115 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:37:06.705889 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:06.721053 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:37:06.721395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:06.740202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:06.770190 systemd[1]: Reloading requested from client PID 2708 ('systemctl') (unit session-7.scope)... Jun 25 18:37:06.770212 systemd[1]: Reloading... Jun 25 18:37:06.976277 zram_generator::config[2746]: No configuration found. Jun 25 18:37:07.188373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:37:07.336053 systemd[1]: Reloading finished in 564 ms. Jun 25 18:37:07.407243 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:37:07.407457 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:37:07.407866 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:07.414189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:08.729906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 18:37:08.752479 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:37:08.822471 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:37:08.822471 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:37:08.822471 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:37:08.833211 kubelet[2803]: I0625 18:37:08.833065 2803 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:37:09.819182 kubelet[2803]: I0625 18:37:09.819146 2803 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:37:09.819182 kubelet[2803]: I0625 18:37:09.819178 2803 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:37:09.821683 kubelet[2803]: I0625 18:37:09.819981 2803 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:37:09.854311 kubelet[2803]: I0625 18:37:09.854279 2803 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:37:09.860158 kubelet[2803]: E0625 18:37:09.859922 2803 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:09.870844 kubelet[2803]: I0625 18:37:09.870803 2803 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:37:09.876491 kubelet[2803]: I0625 18:37:09.876437 2803 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:37:09.876764 kubelet[2803]: I0625 18:37:09.876737 2803 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:37:09.876927 kubelet[2803]: I0625 18:37:09.876770 2803 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:37:09.876927 kubelet[2803]: I0625 18:37:09.876786 2803 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:37:09.878035 kubelet[2803]: I0625 18:37:09.878000 2803 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:37:09.884994 kubelet[2803]: I0625 18:37:09.884955 2803 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:37:09.885639 kubelet[2803]: I0625 18:37:09.885176 2803 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:37:09.885639 kubelet[2803]: I0625 18:37:09.885222 2803 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:37:09.885639 kubelet[2803]: I0625 18:37:09.885239 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:37:09.888613 kubelet[2803]: W0625 18:37:09.888322 2803 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.29.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-210&limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:09.888613 kubelet[2803]: E0625 18:37:09.888389 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-210&limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:09.888613 kubelet[2803]: W0625 18:37:09.888466 2803 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.29.210:6443: connect: connection refused Jun 25 18:37:09.888613 kubelet[2803]: E0625 18:37:09.888519 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:09.889835 kubelet[2803]: I0625 18:37:09.889342 2803 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:37:09.894705 kubelet[2803]: W0625 18:37:09.894128 2803 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:37:09.895859 kubelet[2803]: I0625 18:37:09.895834 2803 server.go:1232] "Started kubelet" Jun 25 18:37:09.896238 kubelet[2803]: I0625 18:37:09.896207 2803 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:37:09.897049 kubelet[2803]: I0625 18:37:09.897020 2803 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:37:09.901196 kubelet[2803]: I0625 18:37:09.900470 2803 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:37:09.901196 kubelet[2803]: I0625 18:37:09.900829 2803 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:37:09.902116 kubelet[2803]: E0625 18:37:09.901952 2803 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-29-210.17dc533184d5e5e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-29-210", UID:"ip-172-31-29-210", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-29-210"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 37, 9, 895800293, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 37, 9, 895800293, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-29-210"}': 'Post "https://172.31.29.210:6443/api/v1/namespaces/default/events": dial tcp 172.31.29.210:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:37:09.903986 kubelet[2803]: I0625 18:37:09.902420 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:37:09.905406 kubelet[2803]: E0625 18:37:09.905348 2803 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:37:09.905599 kubelet[2803]: E0625 18:37:09.905420 2803 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:37:09.906008 kubelet[2803]: E0625 18:37:09.905990 2803 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-29-210\" not found" Jun 25 18:37:09.906084 kubelet[2803]: I0625 18:37:09.906031 2803 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:37:09.906167 kubelet[2803]: I0625 18:37:09.906153 2803 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:37:09.906324 kubelet[2803]: I0625 18:37:09.906313 2803 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:37:09.907295 kubelet[2803]: W0625 18:37:09.906836 2803 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:09.907295 kubelet[2803]: E0625 18:37:09.906891 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:09.908259 kubelet[2803]: E0625 18:37:09.908139 2803 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": dial tcp 172.31.29.210:6443: connect: connection refused" interval="200ms" Jun 25 18:37:09.948864 kubelet[2803]: I0625 18:37:09.948714 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:37:09.952899 kubelet[2803]: I0625 18:37:09.952867 2803 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:37:09.952899 kubelet[2803]: I0625 18:37:09.952896 2803 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:37:09.952899 kubelet[2803]: I0625 18:37:09.952918 2803 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:37:09.953304 kubelet[2803]: E0625 18:37:09.952975 2803 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:37:09.962559 kubelet[2803]: W0625 18:37:09.962119 2803 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:09.962559 kubelet[2803]: E0625 18:37:09.962189 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:09.974292 kubelet[2803]: I0625 18:37:09.974177 2803 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:37:09.974292 kubelet[2803]: I0625 18:37:09.974269 2803 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:37:09.974292 kubelet[2803]: I0625 18:37:09.974293 2803 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:37:09.978517 kubelet[2803]: I0625 18:37:09.978485 2803 policy_none.go:49] "None policy: Start" Jun 25 18:37:09.979265 kubelet[2803]: I0625 18:37:09.979242 2803 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:37:09.979397 kubelet[2803]: I0625 18:37:09.979270 2803 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:37:09.991696 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:37:10.002633 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:37:10.009014 kubelet[2803]: I0625 18:37:10.008979 2803 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-210" Jun 25 18:37:10.010326 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 18:37:10.010983 kubelet[2803]: E0625 18:37:10.010903 2803 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.210:6443/api/v1/nodes\": dial tcp 172.31.29.210:6443: connect: connection refused" node="ip-172-31-29-210" Jun 25 18:37:10.019073 kubelet[2803]: I0625 18:37:10.019047 2803 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:37:10.019376 kubelet[2803]: I0625 18:37:10.019346 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:37:10.022793 kubelet[2803]: E0625 18:37:10.022766 2803 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-210\" not found" Jun 25 18:37:10.055965 kubelet[2803]: I0625 18:37:10.054507 2803 topology_manager.go:215] "Topology Admit Handler" podUID="2e774276867cf009fffecbe50d531463" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-210" Jun 25 18:37:10.057586 kubelet[2803]: I0625 18:37:10.057531 2803 topology_manager.go:215] "Topology Admit Handler" podUID="932b9aa7db56fa09be8cfb17bb76cf9d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:10.059798 kubelet[2803]: I0625 18:37:10.059775 2803 topology_manager.go:215] "Topology Admit Handler" podUID="ebe5efd95d45ab7fd117ea14fe772f3e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-210" Jun 25 18:37:10.069577 systemd[1]: Created slice kubepods-burstable-pod2e774276867cf009fffecbe50d531463.slice - libcontainer container kubepods-burstable-pod2e774276867cf009fffecbe50d531463.slice. Jun 25 18:37:10.086352 systemd[1]: Created slice kubepods-burstable-pod932b9aa7db56fa09be8cfb17bb76cf9d.slice - libcontainer container kubepods-burstable-pod932b9aa7db56fa09be8cfb17bb76cf9d.slice. Jun 25 18:37:10.088605 kubelet[2803]: W0625 18:37:10.088565 2803 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod932b9aa7db56fa09be8cfb17bb76cf9d.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod932b9aa7db56fa09be8cfb17bb76cf9d.slice/cpuset.cpus.effective: no such device Jun 25 18:37:10.099416 systemd[1]: Created slice kubepods-burstable-podebe5efd95d45ab7fd117ea14fe772f3e.slice - libcontainer container kubepods-burstable-podebe5efd95d45ab7fd117ea14fe772f3e.slice. 
Jun 25 18:37:10.106932 kubelet[2803]: I0625 18:37:10.106900 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e774276867cf009fffecbe50d531463-ca-certs\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"2e774276867cf009fffecbe50d531463\") " pod="kube-system/kube-apiserver-ip-172-31-29-210" Jun 25 18:37:10.106932 kubelet[2803]: I0625 18:37:10.106946 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:10.106932 kubelet[2803]: I0625 18:37:10.107067 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:10.106932 kubelet[2803]: I0625 18:37:10.107151 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e774276867cf009fffecbe50d531463-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"2e774276867cf009fffecbe50d531463\") " pod="kube-system/kube-apiserver-ip-172-31-29-210" Jun 25 18:37:10.106932 kubelet[2803]: I0625 18:37:10.107216 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e774276867cf009fffecbe50d531463-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"2e774276867cf009fffecbe50d531463\") " pod="kube-system/kube-apiserver-ip-172-31-29-210" Jun 25 18:37:10.108152 kubelet[2803]: I0625 18:37:10.107893 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:10.108152 kubelet[2803]: I0625 18:37:10.107950 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:10.108152 kubelet[2803]: I0625 18:37:10.108014 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:10.108152 kubelet[2803]: I0625 18:37:10.108047 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ebe5efd95d45ab7fd117ea14fe772f3e-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-210\" (UID: \"ebe5efd95d45ab7fd117ea14fe772f3e\") " pod="kube-system/kube-scheduler-ip-172-31-29-210" Jun 25 18:37:10.109018 kubelet[2803]: E0625 18:37:10.108896 2803 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": dial tcp 172.31.29.210:6443: connect: connection refused" interval="400ms" Jun 25 18:37:10.213164 kubelet[2803]: I0625 18:37:10.213124 2803 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-210" Jun 25 18:37:10.213632 kubelet[2803]: E0625 18:37:10.213519 2803 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.210:6443/api/v1/nodes\": dial tcp 172.31.29.210:6443: connect: connection refused" node="ip-172-31-29-210" Jun 25 18:37:10.386143 containerd[1971]: time="2024-06-25T18:37:10.386011474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-210,Uid:2e774276867cf009fffecbe50d531463,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:10.406810 containerd[1971]: time="2024-06-25T18:37:10.405041532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-210,Uid:ebe5efd95d45ab7fd117ea14fe772f3e,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:10.406810 containerd[1971]: time="2024-06-25T18:37:10.405043420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-210,Uid:932b9aa7db56fa09be8cfb17bb76cf9d,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:10.510002 kubelet[2803]: E0625 18:37:10.509967 2803 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": dial tcp 172.31.29.210:6443: connect: connection refused" interval="800ms" Jun 25 18:37:10.616271 kubelet[2803]: I0625 18:37:10.615752 2803 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-210" Jun 25 18:37:10.616271 kubelet[2803]: E0625 18:37:10.616233 2803 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.210:6443/api/v1/nodes\": dial tcp 172.31.29.210:6443: connect: connection refused" node="ip-172-31-29-210" Jun 25 18:37:10.860550 kubelet[2803]: W0625 18:37:10.860437 2803 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:10.860550 kubelet[2803]: E0625 18:37:10.860552 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:10.979110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount941937944.mount: Deactivated successfully. 
Jun 25 18:37:10.994954 containerd[1971]: time="2024-06-25T18:37:10.994899766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:10.997042 containerd[1971]: time="2024-06-25T18:37:10.996926867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 18:37:10.999860 containerd[1971]: time="2024-06-25T18:37:10.999810530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:11.001960 containerd[1971]: time="2024-06-25T18:37:11.001922048Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:11.004337 containerd[1971]: time="2024-06-25T18:37:11.004027108Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:37:11.009472 containerd[1971]: time="2024-06-25T18:37:11.007048535Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:11.011019 containerd[1971]: time="2024-06-25T18:37:11.010893643Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:37:11.016255 containerd[1971]: time="2024-06-25T18:37:11.014817255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:11.016255 containerd[1971]: time="2024-06-25T18:37:11.015706745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.543779ms" Jun 25 18:37:11.017498 containerd[1971]: time="2024-06-25T18:37:11.017453621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.40372ms" Jun 25 18:37:11.019921 containerd[1971]: time="2024-06-25T18:37:11.019884779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.716874ms" Jun 25 18:37:11.152617 kubelet[2803]: W0625 18:37:11.152409 2803 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:11.152617 
kubelet[2803]: E0625 18:37:11.152474 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:11.263864 kubelet[2803]: W0625 18:37:11.263826 2803 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:11.263864 kubelet[2803]: E0625 18:37:11.263870 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:11.310876 kubelet[2803]: E0625 18:37:11.310840 2803 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": dial tcp 172.31.29.210:6443: connect: connection refused" interval="1.6s" Jun 25 18:37:11.387155 kubelet[2803]: W0625 18:37:11.386870 2803 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.29.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-210&limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:11.387155 kubelet[2803]: E0625 18:37:11.386946 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-210&limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:11.439622 kubelet[2803]: I0625 18:37:11.439011 2803 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-210" Jun 25 18:37:11.439622 kubelet[2803]: E0625 18:37:11.439495 2803 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.210:6443/api/v1/nodes\": dial tcp 172.31.29.210:6443: connect: connection refused" node="ip-172-31-29-210" Jun 25 18:37:11.503386 containerd[1971]: time="2024-06-25T18:37:11.502919893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:11.503386 containerd[1971]: time="2024-06-25T18:37:11.502996826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.503386 containerd[1971]: time="2024-06-25T18:37:11.503025502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:11.503386 containerd[1971]: time="2024-06-25T18:37:11.503047567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.510366 containerd[1971]: time="2024-06-25T18:37:11.509752040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:11.510366 containerd[1971]: time="2024-06-25T18:37:11.509817799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.510366 containerd[1971]: time="2024-06-25T18:37:11.509839563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:11.510366 containerd[1971]: time="2024-06-25T18:37:11.509854980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.511727 containerd[1971]: time="2024-06-25T18:37:11.511484018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:11.511727 containerd[1971]: time="2024-06-25T18:37:11.511552143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.511727 containerd[1971]: time="2024-06-25T18:37:11.511583566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:11.516648 containerd[1971]: time="2024-06-25T18:37:11.516454374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.544958 systemd[1]: Started cri-containerd-9cdb17ea7555e1fc00bca55994e457a3e4f0b89c9dbfe836950321b1cf1f13dd.scope - libcontainer container 9cdb17ea7555e1fc00bca55994e457a3e4f0b89c9dbfe836950321b1cf1f13dd. Jun 25 18:37:11.570128 systemd[1]: Started cri-containerd-7cd763947d580c2bd1bd560fd1a55591a71c94527efb6a023bec39e3dba2e543.scope - libcontainer container 7cd763947d580c2bd1bd560fd1a55591a71c94527efb6a023bec39e3dba2e543. Jun 25 18:37:11.580017 systemd[1]: Started cri-containerd-5a980711b397956fbd438e2deafc84c553fc5d9f9aa5721de0f7a2d7e03341a7.scope - libcontainer container 5a980711b397956fbd438e2deafc84c553fc5d9f9aa5721de0f7a2d7e03341a7. 
Jun 25 18:37:11.754513 containerd[1971]: time="2024-06-25T18:37:11.753623331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-210,Uid:2e774276867cf009fffecbe50d531463,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a980711b397956fbd438e2deafc84c553fc5d9f9aa5721de0f7a2d7e03341a7\"" Jun 25 18:37:11.766737 containerd[1971]: time="2024-06-25T18:37:11.766635263Z" level=info msg="CreateContainer within sandbox \"5a980711b397956fbd438e2deafc84c553fc5d9f9aa5721de0f7a2d7e03341a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:37:11.769840 containerd[1971]: time="2024-06-25T18:37:11.769789582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-210,Uid:ebe5efd95d45ab7fd117ea14fe772f3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cd763947d580c2bd1bd560fd1a55591a71c94527efb6a023bec39e3dba2e543\"" Jun 25 18:37:11.795507 containerd[1971]: time="2024-06-25T18:37:11.795445364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-210,Uid:932b9aa7db56fa09be8cfb17bb76cf9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cdb17ea7555e1fc00bca55994e457a3e4f0b89c9dbfe836950321b1cf1f13dd\"" Jun 25 18:37:11.808350 containerd[1971]: time="2024-06-25T18:37:11.807433592Z" level=info msg="CreateContainer within sandbox \"7cd763947d580c2bd1bd560fd1a55591a71c94527efb6a023bec39e3dba2e543\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:37:11.808492 containerd[1971]: time="2024-06-25T18:37:11.808466928Z" level=info msg="CreateContainer within sandbox \"9cdb17ea7555e1fc00bca55994e457a3e4f0b89c9dbfe836950321b1cf1f13dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:37:11.862593 containerd[1971]: time="2024-06-25T18:37:11.862533383Z" level=info msg="CreateContainer within sandbox \"5a980711b397956fbd438e2deafc84c553fc5d9f9aa5721de0f7a2d7e03341a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"49426a362541667d51141fa9b870da5392fbd5a6c7a5a49670d2936290a53d91\"" Jun 25 18:37:11.864234 containerd[1971]: time="2024-06-25T18:37:11.864197552Z" level=info msg="StartContainer for \"49426a362541667d51141fa9b870da5392fbd5a6c7a5a49670d2936290a53d91\"" Jun 25 18:37:11.866320 containerd[1971]: time="2024-06-25T18:37:11.866121859Z" level=info msg="CreateContainer within sandbox \"7cd763947d580c2bd1bd560fd1a55591a71c94527efb6a023bec39e3dba2e543\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841\"" Jun 25 18:37:11.869742 containerd[1971]: time="2024-06-25T18:37:11.869131901Z" level=info msg="StartContainer for \"387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841\"" Jun 25 18:37:11.874489 containerd[1971]: time="2024-06-25T18:37:11.874438292Z" level=info msg="CreateContainer within sandbox \"9cdb17ea7555e1fc00bca55994e457a3e4f0b89c9dbfe836950321b1cf1f13dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9\"" Jun 25 18:37:11.875133 containerd[1971]: time="2024-06-25T18:37:11.875103206Z" level=info msg="StartContainer for \"61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9\"" Jun 25 18:37:11.948223 systemd[1]: Started cri-containerd-49426a362541667d51141fa9b870da5392fbd5a6c7a5a49670d2936290a53d91.scope - libcontainer container 
49426a362541667d51141fa9b870da5392fbd5a6c7a5a49670d2936290a53d91. Jun 25 18:37:12.002692 kubelet[2803]: E0625 18:37:12.001979 2803 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.210:6443: connect: connection refused Jun 25 18:37:12.019976 systemd[1]: Started cri-containerd-387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841.scope - libcontainer container 387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841. Jun 25 18:37:12.024253 systemd[1]: run-containerd-runc-k8s.io-387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841-runc.oqd9Xv.mount: Deactivated successfully. Jun 25 18:37:12.056435 systemd[1]: run-containerd-runc-k8s.io-61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9-runc.1FRUFa.mount: Deactivated successfully. Jun 25 18:37:12.077522 systemd[1]: Started cri-containerd-61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9.scope - libcontainer container 61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9. Jun 25 18:37:12.209353 containerd[1971]: time="2024-06-25T18:37:12.207489551Z" level=info msg="StartContainer for \"49426a362541667d51141fa9b870da5392fbd5a6c7a5a49670d2936290a53d91\" returns successfully" Jun 25 18:37:12.274188 containerd[1971]: time="2024-06-25T18:37:12.272638557Z" level=info msg="StartContainer for \"387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841\" returns successfully" Jun 25 18:37:12.291931 containerd[1971]: time="2024-06-25T18:37:12.291601213Z" level=info msg="StartContainer for \"61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9\" returns successfully" Jun 25 18:37:12.912924 kubelet[2803]: E0625 18:37:12.912880 2803 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": dial tcp 172.31.29.210:6443: connect: connection refused" interval="3.2s" Jun 25 18:37:13.043345 kubelet[2803]: I0625 18:37:13.043310 2803 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-210" Jun 25 18:37:13.044192 kubelet[2803]: E0625 18:37:13.043824 2803 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.210:6443/api/v1/nodes\": dial tcp 172.31.29.210:6443: connect: connection refused" node="ip-172-31-29-210" Jun 25 18:37:13.673761 update_engine[1950]: I0625 18:37:13.673704 1950 update_attempter.cc:509] Updating boot flags... 
Jun 25 18:37:13.831934 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3090) Jun 25 18:37:14.292826 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3093) Jun 25 18:37:14.681756 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3093) Jun 25 18:37:16.247246 kubelet[2803]: I0625 18:37:16.247217 2803 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-210" Jun 25 18:37:16.747886 kubelet[2803]: E0625 18:37:16.747833 2803 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-210\" not found" node="ip-172-31-29-210" Jun 25 18:37:16.820094 kubelet[2803]: I0625 18:37:16.819867 2803 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-210" Jun 25 18:37:16.892682 kubelet[2803]: I0625 18:37:16.891751 2803 apiserver.go:52] "Watching apiserver" Jun 25 18:37:16.906719 kubelet[2803]: I0625 18:37:16.906642 2803 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:37:19.957687 systemd[1]: Reloading requested from client PID 3344 ('systemctl') (unit session-7.scope)... Jun 25 18:37:19.957747 systemd[1]: Reloading... Jun 25 18:37:20.190702 zram_generator::config[3383]: No configuration found. Jun 25 18:37:20.385061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:37:20.650147 systemd[1]: Reloading finished in 690 ms. Jun 25 18:37:20.729243 kubelet[2803]: I0625 18:37:20.729168 2803 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:37:20.729432 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:20.744038 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:37:20.744366 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:20.744514 systemd[1]: kubelet.service: Consumed 1.150s CPU time, 109.5M memory peak, 0B memory swap peak. Jun 25 18:37:20.751130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:22.480186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:22.494205 (kubelet)[3440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:37:22.655260 kubelet[3440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:37:22.656369 kubelet[3440]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:37:22.656439 kubelet[3440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 18:37:22.664391 kubelet[3440]: I0625 18:37:22.664337 3440 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:37:22.676286 kubelet[3440]: I0625 18:37:22.676250 3440 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:37:22.676581 kubelet[3440]: I0625 18:37:22.676564 3440 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:37:22.677016 kubelet[3440]: I0625 18:37:22.676998 3440 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:37:22.679552 kubelet[3440]: I0625 18:37:22.679528 3440 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:37:22.690992 kubelet[3440]: I0625 18:37:22.690953 3440 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:37:22.713322 kubelet[3440]: I0625 18:37:22.713287 3440 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:37:22.714825 kubelet[3440]: I0625 18:37:22.713788 3440 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:37:22.714825 kubelet[3440]: I0625 18:37:22.714488 3440 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:37:22.714825 kubelet[3440]: I0625 18:37:22.714522 3440 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:37:22.714825 kubelet[3440]: I0625 18:37:22.714539 3440 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:37:22.714825 kubelet[3440]: I0625 18:37:22.714590 3440 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:37:22.717675 kubelet[3440]: I0625 18:37:22.716203 3440 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:37:22.717675 kubelet[3440]: I0625 18:37:22.716256 3440 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:37:22.717675 kubelet[3440]: I0625 18:37:22.716296 3440 kubelet.go:309] "Adding apiserver pod source" Jun 25 
18:37:22.717675 kubelet[3440]: I0625 18:37:22.716423 3440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:37:22.739608 kubelet[3440]: I0625 18:37:22.738928 3440 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:37:22.741978 kubelet[3440]: I0625 18:37:22.740490 3440 server.go:1232] "Started kubelet" Jun 25 18:37:22.754348 kubelet[3440]: I0625 18:37:22.754318 3440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:37:22.761914 kubelet[3440]: I0625 18:37:22.761354 3440 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:37:22.764493 kubelet[3440]: I0625 18:37:22.764401 3440 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:37:22.768229 kubelet[3440]: I0625 18:37:22.767357 3440 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:37:22.772599 kubelet[3440]: I0625 18:37:22.772546 3440 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:37:22.775222 kubelet[3440]: I0625 18:37:22.774268 3440 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:37:22.779737 kubelet[3440]: E0625 18:37:22.777635 3440 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:37:22.779737 kubelet[3440]: E0625 18:37:22.777695 3440 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:37:22.779737 kubelet[3440]: I0625 18:37:22.778510 3440 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:37:22.785120 kubelet[3440]: I0625 18:37:22.784951 3440 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:37:22.840255 kubelet[3440]: I0625 18:37:22.838701 3440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:37:22.846868 kubelet[3440]: I0625 18:37:22.846165 3440 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:37:22.846868 kubelet[3440]: I0625 18:37:22.846211 3440 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:37:22.846868 kubelet[3440]: I0625 18:37:22.846236 3440 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:37:22.846868 kubelet[3440]: E0625 18:37:22.846311 3440 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:37:22.880899 kubelet[3440]: I0625 18:37:22.880872 3440 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-210" Jun 25 18:37:22.932416 kubelet[3440]: I0625 18:37:22.932093 3440 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-29-210" Jun 25 18:37:22.932416 kubelet[3440]: I0625 18:37:22.932383 3440 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-210" Jun 25 18:37:22.949469 kubelet[3440]: E0625 18:37:22.949422 3440 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:37:22.978312 kubelet[3440]: I0625 18:37:22.978289 3440 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:37:22.978493 kubelet[3440]: I0625 18:37:22.978482 3440 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:37:22.978727 kubelet[3440]: I0625 18:37:22.978622 3440 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:37:22.979704 kubelet[3440]: I0625 18:37:22.978978 3440 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:37:22.980062 kubelet[3440]: I0625 18:37:22.979874 3440 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:37:22.980062 kubelet[3440]: I0625 18:37:22.979941 3440 policy_none.go:49] "None policy: Start" Jun 25 18:37:22.981223 kubelet[3440]: I0625 18:37:22.981044 3440 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:37:22.981223 kubelet[3440]: I0625 18:37:22.981088 3440 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:37:22.981760 kubelet[3440]: I0625 18:37:22.981690 3440 state_mem.go:75] "Updated machine memory state" Jun 25 18:37:23.000788 kubelet[3440]: I0625 18:37:22.998283 3440 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:37:23.000788 kubelet[3440]: I0625 18:37:22.999056 3440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:37:23.150337 kubelet[3440]: I0625 18:37:23.150298 3440 topology_manager.go:215] "Topology Admit Handler" podUID="2e774276867cf009fffecbe50d531463" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-210" Jun 25 18:37:23.150507 kubelet[3440]: I0625 18:37:23.150463 3440 topology_manager.go:215] "Topology Admit Handler" podUID="932b9aa7db56fa09be8cfb17bb76cf9d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:23.150558 kubelet[3440]: I0625 18:37:23.150523 3440 topology_manager.go:215] "Topology Admit Handler" podUID="ebe5efd95d45ab7fd117ea14fe772f3e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-210" Jun 25 18:37:23.164523 kubelet[3440]: E0625 18:37:23.164365 3440 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-210\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:23.180944 kubelet[3440]: I0625 18:37:23.180903 3440 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e774276867cf009fffecbe50d531463-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"2e774276867cf009fffecbe50d531463\") " pod="kube-system/kube-apiserver-ip-172-31-29-210" Jun 25 18:37:23.181428 kubelet[3440]: I0625 18:37:23.181375 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:23.181428 kubelet[3440]: I0625 18:37:23.181425 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:23.181565 kubelet[3440]: I0625 18:37:23.181472 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:23.181565 kubelet[3440]: I0625 18:37:23.181505 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebe5efd95d45ab7fd117ea14fe772f3e-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-210\" (UID: \"ebe5efd95d45ab7fd117ea14fe772f3e\") " pod="kube-system/kube-scheduler-ip-172-31-29-210" Jun 25 18:37:23.181565 kubelet[3440]: I0625 18:37:23.181563 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e774276867cf009fffecbe50d531463-ca-certs\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"2e774276867cf009fffecbe50d531463\") " pod="kube-system/kube-apiserver-ip-172-31-29-210" Jun 25 18:37:23.181767 kubelet[3440]: I0625 18:37:23.181636 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e774276867cf009fffecbe50d531463-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"2e774276867cf009fffecbe50d531463\") " pod="kube-system/kube-apiserver-ip-172-31-29-210" Jun 25 18:37:23.181767 kubelet[3440]: I0625 18:37:23.181693 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:23.181767 kubelet[3440]: I0625 18:37:23.181729 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/932b9aa7db56fa09be8cfb17bb76cf9d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: 
\"932b9aa7db56fa09be8cfb17bb76cf9d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Jun 25 18:37:23.720225 kubelet[3440]: I0625 18:37:23.719772 3440 apiserver.go:52] "Watching apiserver" Jun 25 18:37:23.774963 kubelet[3440]: I0625 18:37:23.774811 3440 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:37:24.015170 kubelet[3440]: I0625 18:37:24.014992 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-210" podStartSLOduration=1.014737142 podCreationTimestamp="2024-06-25 18:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:24.011328609 +0000 UTC m=+1.506432367" watchObservedRunningTime="2024-06-25 18:37:24.014737142 +0000 UTC m=+1.509840901" Jun 25 18:37:24.069787 kubelet[3440]: I0625 18:37:24.069750 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-210" podStartSLOduration=1.069573946 podCreationTimestamp="2024-06-25 18:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:24.048164217 +0000 UTC m=+1.543267977" watchObservedRunningTime="2024-06-25 18:37:24.069573946 +0000 UTC m=+1.564677694" Jun 25 18:37:27.900556 kubelet[3440]: I0625 18:37:27.899690 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-210" podStartSLOduration=7.899613536 podCreationTimestamp="2024-06-25 18:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:24.071557434 +0000 UTC m=+1.566661194" watchObservedRunningTime="2024-06-25 18:37:27.899613536 +0000 UTC m=+5.394717296" Jun 25 18:37:28.949326 sudo[2292]: pam_unix(sudo:session): session closed for user root Jun 25 18:37:28.972742 sshd[2288]: pam_unix(sshd:session): session closed for user core Jun 25 18:37:28.976588 systemd[1]: sshd@6-172.31.29.210:22-139.178.68.195:40844.service: Deactivated successfully. Jun 25 18:37:28.979099 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:37:28.979952 systemd[1]: session-7.scope: Consumed 5.482s CPU time, 133.2M memory peak, 0B memory swap peak. Jun 25 18:37:28.983078 systemd-logind[1946]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:37:28.985047 systemd-logind[1946]: Removed session 7. Jun 25 18:37:32.933318 kubelet[3440]: I0625 18:37:32.933129 3440 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:37:32.953463 containerd[1971]: time="2024-06-25T18:37:32.953412627Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:37:32.954422 kubelet[3440]: I0625 18:37:32.954392 3440 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:37:33.060704 kubelet[3440]: I0625 18:37:33.060245 3440 topology_manager.go:215] "Topology Admit Handler" podUID="ee70a5b4-445d-4772-abe1-5e35c2005d70" podNamespace="kube-system" podName="kube-proxy-fwvqq" Jun 25 18:37:33.118310 systemd[1]: Created slice kubepods-besteffort-podee70a5b4_445d_4772_abe1_5e35c2005d70.slice - libcontainer container kubepods-besteffort-podee70a5b4_445d_4772_abe1_5e35c2005d70.slice. 
Jun 25 18:37:33.208001 kubelet[3440]: I0625 18:37:33.207484 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee70a5b4-445d-4772-abe1-5e35c2005d70-lib-modules\") pod \"kube-proxy-fwvqq\" (UID: \"ee70a5b4-445d-4772-abe1-5e35c2005d70\") " pod="kube-system/kube-proxy-fwvqq" Jun 25 18:37:33.208001 kubelet[3440]: I0625 18:37:33.207538 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee70a5b4-445d-4772-abe1-5e35c2005d70-kube-proxy\") pod \"kube-proxy-fwvqq\" (UID: \"ee70a5b4-445d-4772-abe1-5e35c2005d70\") " pod="kube-system/kube-proxy-fwvqq" Jun 25 18:37:33.208001 kubelet[3440]: I0625 18:37:33.207576 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxdm\" (UniqueName: \"kubernetes.io/projected/ee70a5b4-445d-4772-abe1-5e35c2005d70-kube-api-access-bsxdm\") pod \"kube-proxy-fwvqq\" (UID: \"ee70a5b4-445d-4772-abe1-5e35c2005d70\") " pod="kube-system/kube-proxy-fwvqq" Jun 25 18:37:33.208001 kubelet[3440]: I0625 18:37:33.207607 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee70a5b4-445d-4772-abe1-5e35c2005d70-xtables-lock\") pod \"kube-proxy-fwvqq\" (UID: \"ee70a5b4-445d-4772-abe1-5e35c2005d70\") " pod="kube-system/kube-proxy-fwvqq" Jun 25 18:37:33.237491 kubelet[3440]: I0625 18:37:33.235394 3440 topology_manager.go:215] "Topology Admit Handler" podUID="da55dd67-66f6-4c15-a77b-1081327f7b5b" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-zd4tg" Jun 25 18:37:33.249210 systemd[1]: Created slice kubepods-besteffort-podda55dd67_66f6_4c15_a77b_1081327f7b5b.slice - libcontainer container kubepods-besteffort-podda55dd67_66f6_4c15_a77b_1081327f7b5b.slice. Jun 25 18:37:33.308929 kubelet[3440]: I0625 18:37:33.308895 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xgsp\" (UniqueName: \"kubernetes.io/projected/da55dd67-66f6-4c15-a77b-1081327f7b5b-kube-api-access-9xgsp\") pod \"tigera-operator-76c4974c85-zd4tg\" (UID: \"da55dd67-66f6-4c15-a77b-1081327f7b5b\") " pod="tigera-operator/tigera-operator-76c4974c85-zd4tg" Jun 25 18:37:33.311326 kubelet[3440]: I0625 18:37:33.310968 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/da55dd67-66f6-4c15-a77b-1081327f7b5b-var-lib-calico\") pod \"tigera-operator-76c4974c85-zd4tg\" (UID: \"da55dd67-66f6-4c15-a77b-1081327f7b5b\") " pod="tigera-operator/tigera-operator-76c4974c85-zd4tg" Jun 25 18:37:33.436797 containerd[1971]: time="2024-06-25T18:37:33.436734654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fwvqq,Uid:ee70a5b4-445d-4772-abe1-5e35c2005d70,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:33.479777 containerd[1971]: time="2024-06-25T18:37:33.479554580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:33.480731 containerd[1971]: time="2024-06-25T18:37:33.479676821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:33.480731 containerd[1971]: time="2024-06-25T18:37:33.480614292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:33.480731 containerd[1971]: time="2024-06-25T18:37:33.480638960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:33.517899 systemd[1]: Started cri-containerd-790d1933b78bcdfb332fd19cab026cd79ac02e21acd57365c307f0838162fb96.scope - libcontainer container 790d1933b78bcdfb332fd19cab026cd79ac02e21acd57365c307f0838162fb96. Jun 25 18:37:33.552402 containerd[1971]: time="2024-06-25T18:37:33.551695601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fwvqq,Uid:ee70a5b4-445d-4772-abe1-5e35c2005d70,Namespace:kube-system,Attempt:0,} returns sandbox id \"790d1933b78bcdfb332fd19cab026cd79ac02e21acd57365c307f0838162fb96\"" Jun 25 18:37:33.556216 containerd[1971]: time="2024-06-25T18:37:33.555852511Z" level=info msg="CreateContainer within sandbox \"790d1933b78bcdfb332fd19cab026cd79ac02e21acd57365c307f0838162fb96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:37:33.558087 containerd[1971]: time="2024-06-25T18:37:33.558027078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-zd4tg,Uid:da55dd67-66f6-4c15-a77b-1081327f7b5b,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:37:33.662864 containerd[1971]: time="2024-06-25T18:37:33.659984934Z" level=info msg="CreateContainer within sandbox \"790d1933b78bcdfb332fd19cab026cd79ac02e21acd57365c307f0838162fb96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5005dd5086b26e7a3c5edbce8f1d3c779dbed3f0d6db780ebbf060d00065d4c1\"" Jun 25 18:37:33.662864 containerd[1971]: time="2024-06-25T18:37:33.661893812Z" level=info msg="StartContainer for \"5005dd5086b26e7a3c5edbce8f1d3c779dbed3f0d6db780ebbf060d00065d4c1\"" Jun 25 18:37:33.680114 containerd[1971]: time="2024-06-25T18:37:33.679958584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:33.680401 containerd[1971]: time="2024-06-25T18:37:33.680350824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:33.680705 containerd[1971]: time="2024-06-25T18:37:33.680667364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:33.680946 containerd[1971]: time="2024-06-25T18:37:33.680915677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:33.730948 systemd[1]: Started cri-containerd-820454c9d08417a8a716bbd411307fe6fbddaa299cbe3beaa214d56b42c6ae8b.scope - libcontainer container 820454c9d08417a8a716bbd411307fe6fbddaa299cbe3beaa214d56b42c6ae8b. Jun 25 18:37:33.741069 systemd[1]: Started cri-containerd-5005dd5086b26e7a3c5edbce8f1d3c779dbed3f0d6db780ebbf060d00065d4c1.scope - libcontainer container 5005dd5086b26e7a3c5edbce8f1d3c779dbed3f0d6db780ebbf060d00065d4c1. 
Jun 25 18:37:33.797340 containerd[1971]: time="2024-06-25T18:37:33.797114025Z" level=info msg="StartContainer for \"5005dd5086b26e7a3c5edbce8f1d3c779dbed3f0d6db780ebbf060d00065d4c1\" returns successfully" Jun 25 18:37:33.818924 containerd[1971]: time="2024-06-25T18:37:33.818623719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-zd4tg,Uid:da55dd67-66f6-4c15-a77b-1081327f7b5b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"820454c9d08417a8a716bbd411307fe6fbddaa299cbe3beaa214d56b42c6ae8b\"" Jun 25 18:37:33.822836 containerd[1971]: time="2024-06-25T18:37:33.822730693Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:37:33.969683 kubelet[3440]: I0625 18:37:33.969533 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fwvqq" podStartSLOduration=0.969484783 podCreationTimestamp="2024-06-25 18:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:33.96815662 +0000 UTC m=+11.463260378" watchObservedRunningTime="2024-06-25 18:37:33.969484783 +0000 UTC m=+11.464588546" Jun 25 18:37:34.356298 systemd[1]: run-containerd-runc-k8s.io-790d1933b78bcdfb332fd19cab026cd79ac02e21acd57365c307f0838162fb96-runc.LNAd8P.mount: Deactivated successfully. Jun 25 18:37:35.415887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170805254.mount: Deactivated successfully. Jun 25 18:37:36.597459 containerd[1971]: time="2024-06-25T18:37:36.597398804Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:36.601610 containerd[1971]: time="2024-06-25T18:37:36.600826975Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076108" Jun 25 18:37:36.603169 containerd[1971]: time="2024-06-25T18:37:36.602119600Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:36.609731 containerd[1971]: time="2024-06-25T18:37:36.609683861Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:36.611088 containerd[1971]: time="2024-06-25T18:37:36.610878044Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.788080627s" Jun 25 18:37:36.611088 containerd[1971]: time="2024-06-25T18:37:36.610924671Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 18:37:36.613776 containerd[1971]: time="2024-06-25T18:37:36.613702318Z" level=info msg="CreateContainer within sandbox \"820454c9d08417a8a716bbd411307fe6fbddaa299cbe3beaa214d56b42c6ae8b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:37:36.681937 containerd[1971]: time="2024-06-25T18:37:36.681882995Z" level=info msg="CreateContainer within sandbox 
\"820454c9d08417a8a716bbd411307fe6fbddaa299cbe3beaa214d56b42c6ae8b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc\"" Jun 25 18:37:36.682623 containerd[1971]: time="2024-06-25T18:37:36.682578202Z" level=info msg="StartContainer for \"695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc\"" Jun 25 18:37:36.759563 systemd[1]: Started cri-containerd-695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc.scope - libcontainer container 695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc. Jun 25 18:37:36.802329 containerd[1971]: time="2024-06-25T18:37:36.802285279Z" level=info msg="StartContainer for \"695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc\" returns successfully" Jun 25 18:37:40.337535 kubelet[3440]: I0625 18:37:40.337485 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-zd4tg" podStartSLOduration=4.546225123 podCreationTimestamp="2024-06-25 18:37:33 +0000 UTC" firstStartedPulling="2024-06-25 18:37:33.820461959 +0000 UTC m=+11.315565697" lastFinishedPulling="2024-06-25 18:37:36.611670204 +0000 UTC m=+14.106773953" observedRunningTime="2024-06-25 18:37:37.035921334 +0000 UTC m=+14.531025096" watchObservedRunningTime="2024-06-25 18:37:40.337433379 +0000 UTC m=+17.832537138" Jun 25 18:37:40.338251 kubelet[3440]: I0625 18:37:40.337638 3440 topology_manager.go:215] "Topology Admit Handler" podUID="ab2271cd-e14d-4a3b-95a4-f5e7b39f85d7" podNamespace="calico-system" podName="calico-typha-778cc54b77-xcb64" Jun 25 18:37:40.354897 systemd[1]: Created slice kubepods-besteffort-podab2271cd_e14d_4a3b_95a4_f5e7b39f85d7.slice - libcontainer container kubepods-besteffort-podab2271cd_e14d_4a3b_95a4_f5e7b39f85d7.slice. Jun 25 18:37:40.477106 kubelet[3440]: I0625 18:37:40.477033 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ab2271cd-e14d-4a3b-95a4-f5e7b39f85d7-typha-certs\") pod \"calico-typha-778cc54b77-xcb64\" (UID: \"ab2271cd-e14d-4a3b-95a4-f5e7b39f85d7\") " pod="calico-system/calico-typha-778cc54b77-xcb64" Jun 25 18:37:40.477370 kubelet[3440]: I0625 18:37:40.477135 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab2271cd-e14d-4a3b-95a4-f5e7b39f85d7-tigera-ca-bundle\") pod \"calico-typha-778cc54b77-xcb64\" (UID: \"ab2271cd-e14d-4a3b-95a4-f5e7b39f85d7\") " pod="calico-system/calico-typha-778cc54b77-xcb64" Jun 25 18:37:40.477370 kubelet[3440]: I0625 18:37:40.477255 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-424k9\" (UniqueName: \"kubernetes.io/projected/ab2271cd-e14d-4a3b-95a4-f5e7b39f85d7-kube-api-access-424k9\") pod \"calico-typha-778cc54b77-xcb64\" (UID: \"ab2271cd-e14d-4a3b-95a4-f5e7b39f85d7\") " pod="calico-system/calico-typha-778cc54b77-xcb64" Jun 25 18:37:40.502115 kubelet[3440]: I0625 18:37:40.502076 3440 topology_manager.go:215] "Topology Admit Handler" podUID="fdf33670-49fd-48a0-b339-b1759363154b" podNamespace="calico-system" podName="calico-node-zkpvl" Jun 25 18:37:40.514033 systemd[1]: Created slice kubepods-besteffort-podfdf33670_49fd_48a0_b339_b1759363154b.slice - libcontainer container kubepods-besteffort-podfdf33670_49fd_48a0_b339_b1759363154b.slice. 
Jun 25 18:37:40.580808 kubelet[3440]: I0625 18:37:40.578492 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fdf33670-49fd-48a0-b339-b1759363154b-cni-log-dir\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.580808 kubelet[3440]: I0625 18:37:40.578606 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fdf33670-49fd-48a0-b339-b1759363154b-cni-bin-dir\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.580808 kubelet[3440]: I0625 18:37:40.578645 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fdf33670-49fd-48a0-b339-b1759363154b-var-lib-calico\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.580808 kubelet[3440]: I0625 18:37:40.578696 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fdf33670-49fd-48a0-b339-b1759363154b-cni-net-dir\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.580808 kubelet[3440]: I0625 18:37:40.578761 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdf33670-49fd-48a0-b339-b1759363154b-lib-modules\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.582610 kubelet[3440]: I0625 18:37:40.578788 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdf33670-49fd-48a0-b339-b1759363154b-xtables-lock\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.582610 kubelet[3440]: I0625 18:37:40.578816 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fdf33670-49fd-48a0-b339-b1759363154b-policysync\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.582610 kubelet[3440]: I0625 18:37:40.578854 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fdf33670-49fd-48a0-b339-b1759363154b-flexvol-driver-host\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.582610 kubelet[3440]: I0625 18:37:40.578885 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fdf33670-49fd-48a0-b339-b1759363154b-var-run-calico\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.582610 kubelet[3440]: I0625 18:37:40.578973 3440 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fdf33670-49fd-48a0-b339-b1759363154b-node-certs\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.582817 kubelet[3440]: I0625 18:37:40.579029 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdf33670-49fd-48a0-b339-b1759363154b-tigera-ca-bundle\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.582817 kubelet[3440]: I0625 18:37:40.579062 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ptnp\" (UniqueName: \"kubernetes.io/projected/fdf33670-49fd-48a0-b339-b1759363154b-kube-api-access-9ptnp\") pod \"calico-node-zkpvl\" (UID: \"fdf33670-49fd-48a0-b339-b1759363154b\") " pod="calico-system/calico-node-zkpvl" Jun 25 18:37:40.662611 containerd[1971]: time="2024-06-25T18:37:40.661991429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-778cc54b77-xcb64,Uid:ab2271cd-e14d-4a3b-95a4-f5e7b39f85d7,Namespace:calico-system,Attempt:0,}" Jun 25 18:37:40.706269 kubelet[3440]: E0625 18:37:40.706234 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.706269 kubelet[3440]: W0625 18:37:40.706264 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.706646 kubelet[3440]: E0625 18:37:40.706295 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.744523 kubelet[3440]: E0625 18:37:40.744491 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.744523 kubelet[3440]: W0625 18:37:40.744523 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.744761 kubelet[3440]: E0625 18:37:40.744556 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.755673 kubelet[3440]: I0625 18:37:40.755619 3440 topology_manager.go:215] "Topology Admit Handler" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" podNamespace="calico-system" podName="csi-node-driver-4cm5c" Jun 25 18:37:40.760178 kubelet[3440]: E0625 18:37:40.760106 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:40.762096 containerd[1971]: time="2024-06-25T18:37:40.761096692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:40.762096 containerd[1971]: time="2024-06-25T18:37:40.761358366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:40.762096 containerd[1971]: time="2024-06-25T18:37:40.761431519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:40.762096 containerd[1971]: time="2024-06-25T18:37:40.761513519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:40.798904 systemd[1]: Started cri-containerd-36578935a00ddfc50dbb241713cb3a0354ce3136b700b6e46624fdaf31e3614c.scope - libcontainer container 36578935a00ddfc50dbb241713cb3a0354ce3136b700b6e46624fdaf31e3614c. Jun 25 18:37:40.840334 kubelet[3440]: E0625 18:37:40.840306 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.840706 kubelet[3440]: W0625 18:37:40.840516 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.840706 kubelet[3440]: E0625 18:37:40.840549 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.840972 containerd[1971]: time="2024-06-25T18:37:40.840898047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkpvl,Uid:fdf33670-49fd-48a0-b339-b1759363154b,Namespace:calico-system,Attempt:0,}" Jun 25 18:37:40.841682 kubelet[3440]: E0625 18:37:40.841309 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.841682 kubelet[3440]: W0625 18:37:40.841324 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.841682 kubelet[3440]: E0625 18:37:40.841365 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.847126 kubelet[3440]: E0625 18:37:40.846776 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.847126 kubelet[3440]: W0625 18:37:40.846806 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.847562 kubelet[3440]: E0625 18:37:40.847278 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:40.853221 kubelet[3440]: E0625 18:37:40.853043 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.853221 kubelet[3440]: W0625 18:37:40.853081 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.853221 kubelet[3440]: E0625 18:37:40.853109 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.854052 kubelet[3440]: E0625 18:37:40.853854 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.854052 kubelet[3440]: W0625 18:37:40.853870 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.854052 kubelet[3440]: E0625 18:37:40.853908 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.858598 kubelet[3440]: E0625 18:37:40.858394 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.858598 kubelet[3440]: W0625 18:37:40.858422 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.858598 kubelet[3440]: E0625 18:37:40.858450 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.859036 kubelet[3440]: E0625 18:37:40.859021 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.859448 kubelet[3440]: W0625 18:37:40.859387 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.859448 kubelet[3440]: E0625 18:37:40.859418 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.862044 kubelet[3440]: E0625 18:37:40.861845 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.862044 kubelet[3440]: W0625 18:37:40.861864 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.862044 kubelet[3440]: E0625 18:37:40.861889 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:40.863855 kubelet[3440]: E0625 18:37:40.863437 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.863855 kubelet[3440]: W0625 18:37:40.863458 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.863855 kubelet[3440]: E0625 18:37:40.863481 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.864512 kubelet[3440]: E0625 18:37:40.864340 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.864512 kubelet[3440]: W0625 18:37:40.864354 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.864512 kubelet[3440]: E0625 18:37:40.864385 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.868350 kubelet[3440]: E0625 18:37:40.867895 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.868350 kubelet[3440]: W0625 18:37:40.867916 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.868350 kubelet[3440]: E0625 18:37:40.868098 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.869228 kubelet[3440]: E0625 18:37:40.869183 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.869228 kubelet[3440]: W0625 18:37:40.869215 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.869413 kubelet[3440]: E0625 18:37:40.869238 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.870537 kubelet[3440]: E0625 18:37:40.870421 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.870537 kubelet[3440]: W0625 18:37:40.870438 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.870537 kubelet[3440]: E0625 18:37:40.870456 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:40.872122 kubelet[3440]: E0625 18:37:40.870681 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.872122 kubelet[3440]: W0625 18:37:40.870698 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.872122 kubelet[3440]: E0625 18:37:40.870715 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.872122 kubelet[3440]: E0625 18:37:40.870929 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.872122 kubelet[3440]: W0625 18:37:40.870940 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.872122 kubelet[3440]: E0625 18:37:40.870956 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.875930 kubelet[3440]: E0625 18:37:40.874241 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.875930 kubelet[3440]: W0625 18:37:40.874262 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.875930 kubelet[3440]: E0625 18:37:40.874287 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.875930 kubelet[3440]: E0625 18:37:40.874785 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.875930 kubelet[3440]: W0625 18:37:40.874797 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.875930 kubelet[3440]: E0625 18:37:40.874815 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.875930 kubelet[3440]: E0625 18:37:40.875012 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.875930 kubelet[3440]: W0625 18:37:40.875022 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.875930 kubelet[3440]: E0625 18:37:40.875039 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:40.875930 kubelet[3440]: E0625 18:37:40.875393 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.877567 kubelet[3440]: W0625 18:37:40.875405 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.877567 kubelet[3440]: E0625 18:37:40.875422 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.877567 kubelet[3440]: E0625 18:37:40.875624 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.877567 kubelet[3440]: W0625 18:37:40.875634 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.877567 kubelet[3440]: E0625 18:37:40.875650 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.882372 kubelet[3440]: E0625 18:37:40.882220 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.882372 kubelet[3440]: W0625 18:37:40.882245 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.882372 kubelet[3440]: E0625 18:37:40.882272 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.882372 kubelet[3440]: I0625 18:37:40.882312 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53634da2-c3fe-455c-8218-c4b393d92a3f-kubelet-dir\") pod \"csi-node-driver-4cm5c\" (UID: \"53634da2-c3fe-455c-8218-c4b393d92a3f\") " pod="calico-system/csi-node-driver-4cm5c" Jun 25 18:37:40.882650 kubelet[3440]: E0625 18:37:40.882610 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.882650 kubelet[3440]: W0625 18:37:40.882624 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.883607 kubelet[3440]: E0625 18:37:40.882859 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:40.883607 kubelet[3440]: I0625 18:37:40.883005 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvsck\" (UniqueName: \"kubernetes.io/projected/53634da2-c3fe-455c-8218-c4b393d92a3f-kube-api-access-fvsck\") pod \"csi-node-driver-4cm5c\" (UID: \"53634da2-c3fe-455c-8218-c4b393d92a3f\") " pod="calico-system/csi-node-driver-4cm5c" Jun 25 18:37:40.883607 kubelet[3440]: E0625 18:37:40.883176 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.883607 kubelet[3440]: W0625 18:37:40.883188 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.883607 kubelet[3440]: E0625 18:37:40.883223 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.883607 kubelet[3440]: E0625 18:37:40.883540 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.883607 kubelet[3440]: W0625 18:37:40.883550 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.883607 kubelet[3440]: E0625 18:37:40.883584 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.884853 kubelet[3440]: E0625 18:37:40.883915 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.884853 kubelet[3440]: W0625 18:37:40.883926 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.884853 kubelet[3440]: E0625 18:37:40.883995 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.884853 kubelet[3440]: I0625 18:37:40.884133 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53634da2-c3fe-455c-8218-c4b393d92a3f-registration-dir\") pod \"csi-node-driver-4cm5c\" (UID: \"53634da2-c3fe-455c-8218-c4b393d92a3f\") " pod="calico-system/csi-node-driver-4cm5c" Jun 25 18:37:40.884853 kubelet[3440]: E0625 18:37:40.884788 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.884853 kubelet[3440]: W0625 18:37:40.884801 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.884853 kubelet[3440]: E0625 18:37:40.884827 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:40.885217 kubelet[3440]: E0625 18:37:40.885195 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.885266 kubelet[3440]: W0625 18:37:40.885228 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.885266 kubelet[3440]: E0625 18:37:40.885257 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.886444 kubelet[3440]: E0625 18:37:40.885600 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.886444 kubelet[3440]: W0625 18:37:40.885613 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.886444 kubelet[3440]: E0625 18:37:40.885645 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.886444 kubelet[3440]: I0625 18:37:40.885857 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53634da2-c3fe-455c-8218-c4b393d92a3f-socket-dir\") pod \"csi-node-driver-4cm5c\" (UID: \"53634da2-c3fe-455c-8218-c4b393d92a3f\") " pod="calico-system/csi-node-driver-4cm5c" Jun 25 18:37:40.886444 kubelet[3440]: E0625 18:37:40.886079 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.886444 kubelet[3440]: W0625 18:37:40.886111 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.886444 kubelet[3440]: E0625 18:37:40.886132 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.887752 kubelet[3440]: E0625 18:37:40.886527 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.887752 kubelet[3440]: W0625 18:37:40.886540 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.887752 kubelet[3440]: E0625 18:37:40.886575 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:40.887752 kubelet[3440]: E0625 18:37:40.886954 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.887752 kubelet[3440]: W0625 18:37:40.886965 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.887752 kubelet[3440]: E0625 18:37:40.887016 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.887752 kubelet[3440]: I0625 18:37:40.887236 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/53634da2-c3fe-455c-8218-c4b393d92a3f-varrun\") pod \"csi-node-driver-4cm5c\" (UID: \"53634da2-c3fe-455c-8218-c4b393d92a3f\") " pod="calico-system/csi-node-driver-4cm5c" Jun 25 18:37:40.887752 kubelet[3440]: E0625 18:37:40.887553 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.887752 kubelet[3440]: W0625 18:37:40.887565 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.888162 kubelet[3440]: E0625 18:37:40.887597 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.888162 kubelet[3440]: E0625 18:37:40.887972 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.888162 kubelet[3440]: W0625 18:37:40.888008 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.888162 kubelet[3440]: E0625 18:37:40.888037 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.888402 kubelet[3440]: E0625 18:37:40.888377 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.888402 kubelet[3440]: W0625 18:37:40.888388 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.888774 kubelet[3440]: E0625 18:37:40.888405 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:40.889682 kubelet[3440]: E0625 18:37:40.889070 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.889682 kubelet[3440]: W0625 18:37:40.889084 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.889682 kubelet[3440]: E0625 18:37:40.889100 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.991521 kubelet[3440]: E0625 18:37:40.991319 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.991521 kubelet[3440]: W0625 18:37:40.991400 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.991521 kubelet[3440]: E0625 18:37:40.991434 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.996546 kubelet[3440]: E0625 18:37:40.996241 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.996546 kubelet[3440]: W0625 18:37:40.996266 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.997922 kubelet[3440]: E0625 18:37:40.997500 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.998113 kubelet[3440]: E0625 18:37:40.998099 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:40.998222 kubelet[3440]: W0625 18:37:40.998210 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:40.998397 kubelet[3440]: E0625 18:37:40.998332 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:40.999216 kubelet[3440]: E0625 18:37:40.999201 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.002744 kubelet[3440]: W0625 18:37:41.002312 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.002744 kubelet[3440]: E0625 18:37:41.002367 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:41.004047 kubelet[3440]: E0625 18:37:41.004029 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.004354 kubelet[3440]: W0625 18:37:41.004301 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.005106 kubelet[3440]: E0625 18:37:41.004977 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.008742 kubelet[3440]: E0625 18:37:41.007979 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.008742 kubelet[3440]: W0625 18:37:41.008066 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.010752 kubelet[3440]: E0625 18:37:41.010733 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.014809 kubelet[3440]: E0625 18:37:41.014504 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.014809 kubelet[3440]: W0625 18:37:41.014526 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.014809 kubelet[3440]: E0625 18:37:41.014670 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.015469 kubelet[3440]: E0625 18:37:41.015158 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.015469 kubelet[3440]: W0625 18:37:41.015184 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.015822 kubelet[3440]: E0625 18:37:41.015706 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.015951 kubelet[3440]: E0625 18:37:41.015868 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.015951 kubelet[3440]: W0625 18:37:41.015878 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.016675 kubelet[3440]: E0625 18:37:41.016332 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:41.018482 kubelet[3440]: E0625 18:37:41.017920 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.018482 kubelet[3440]: W0625 18:37:41.017938 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.018814 kubelet[3440]: E0625 18:37:41.018616 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.020759 kubelet[3440]: E0625 18:37:41.020278 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.020759 kubelet[3440]: W0625 18:37:41.020359 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.020759 kubelet[3440]: E0625 18:37:41.020441 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.022409 kubelet[3440]: E0625 18:37:41.021770 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.022409 kubelet[3440]: W0625 18:37:41.021784 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.022409 kubelet[3440]: E0625 18:37:41.021852 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.022409 kubelet[3440]: E0625 18:37:41.022255 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.022409 kubelet[3440]: W0625 18:37:41.022266 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.024032 kubelet[3440]: E0625 18:37:41.023013 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.024032 kubelet[3440]: E0625 18:37:41.023200 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.024032 kubelet[3440]: W0625 18:37:41.023210 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.024032 kubelet[3440]: E0625 18:37:41.024004 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:41.026472 containerd[1971]: time="2024-06-25T18:37:41.026016865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:41.026472 containerd[1971]: time="2024-06-25T18:37:41.026096392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:41.026472 containerd[1971]: time="2024-06-25T18:37:41.026141412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:41.026909 containerd[1971]: time="2024-06-25T18:37:41.026393509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:41.027786 kubelet[3440]: E0625 18:37:41.027765 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.027921 kubelet[3440]: W0625 18:37:41.027788 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.027921 kubelet[3440]: E0625 18:37:41.027830 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.028299 kubelet[3440]: E0625 18:37:41.028219 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.028299 kubelet[3440]: W0625 18:37:41.028231 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.028714 kubelet[3440]: E0625 18:37:41.028694 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.030848 kubelet[3440]: E0625 18:37:41.030828 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.030848 kubelet[3440]: W0625 18:37:41.030849 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.031100 kubelet[3440]: E0625 18:37:41.030970 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.032890 kubelet[3440]: E0625 18:37:41.032868 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.032890 kubelet[3440]: W0625 18:37:41.032890 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.033318 kubelet[3440]: E0625 18:37:41.033111 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:41.034797 kubelet[3440]: E0625 18:37:41.034778 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.034797 kubelet[3440]: W0625 18:37:41.034797 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.035045 kubelet[3440]: E0625 18:37:41.035017 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.035135 kubelet[3440]: E0625 18:37:41.035125 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.035214 kubelet[3440]: W0625 18:37:41.035135 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.037794 kubelet[3440]: E0625 18:37:41.037772 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.038983 kubelet[3440]: E0625 18:37:41.038965 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.038983 kubelet[3440]: W0625 18:37:41.038983 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.039318 kubelet[3440]: E0625 18:37:41.039140 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.039476 kubelet[3440]: E0625 18:37:41.039453 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.039476 kubelet[3440]: W0625 18:37:41.039464 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.039626 kubelet[3440]: E0625 18:37:41.039585 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.040163 kubelet[3440]: E0625 18:37:41.040145 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.040163 kubelet[3440]: W0625 18:37:41.040162 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.040341 kubelet[3440]: E0625 18:37:41.040288 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:41.040756 kubelet[3440]: E0625 18:37:41.040737 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.041708 kubelet[3440]: W0625 18:37:41.041686 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.041788 kubelet[3440]: E0625 18:37:41.041723 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.042117 kubelet[3440]: E0625 18:37:41.042101 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.042117 kubelet[3440]: W0625 18:37:41.042117 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.042212 kubelet[3440]: E0625 18:37:41.042135 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.071899 kubelet[3440]: E0625 18:37:41.071843 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:41.071899 kubelet[3440]: W0625 18:37:41.071869 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:41.071899 kubelet[3440]: E0625 18:37:41.071898 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:41.083377 systemd[1]: Started cri-containerd-3b456c0c39fff30d57ace9c1fa596417ef9cad693b1477a78fdbc078230fbfc4.scope - libcontainer container 3b456c0c39fff30d57ace9c1fa596417ef9cad693b1477a78fdbc078230fbfc4. 
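The repeated driver-call.go / plugins.go triplets above are the kubelet's periodic FlexVolume probe: for every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes the driver binary with the single argument "init" and parses stdout as JSON. Because nodeagent~uds/uds does not exist on this node, the call produces no output, and unmarshalling an empty byte slice in Go fails with "unexpected end of JSON input"; the probe then skips that directory and retries later, which is why the same three lines recur. The following is only a minimal sketch of that failure mode, not the kubelet's actual code, and the driverStatus struct is a simplified stand-in for the real FlexVolume status type.

// flexprobe.go: reproduce the "unexpected end of JSON input" seen above
// when a FlexVolume driver binary is missing.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for the JSON a FlexVolume driver
// is expected to print in response to "init", e.g. {"status":"Success"}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probe(driver string) error {
	// The kubelet execs the driver with the single argument "init".
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// A missing binary leaves out empty; the kubelet logs this case as
		// "executable file not found in $PATH, output: \"\"".
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	// Unmarshalling the empty output is what produces
	// "unexpected end of JSON input".
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		return fmt.Errorf("failed to unmarshal output for command init: %w", uerr)
	}
	fmt.Printf("driver reported status %q\n", st.Status)
	return nil
}

func main() {
	// Path taken from the log; on a healthy node this would be an executable.
	err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	if err != nil {
		fmt.Println(err)
	}
}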
Jun 25 18:37:41.146434 containerd[1971]: time="2024-06-25T18:37:41.146239612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-778cc54b77-xcb64,Uid:ab2271cd-e14d-4a3b-95a4-f5e7b39f85d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"36578935a00ddfc50dbb241713cb3a0354ce3136b700b6e46624fdaf31e3614c\"" Jun 25 18:37:41.155524 containerd[1971]: time="2024-06-25T18:37:41.155487571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:37:41.189136 containerd[1971]: time="2024-06-25T18:37:41.188892606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkpvl,Uid:fdf33670-49fd-48a0-b339-b1759363154b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b456c0c39fff30d57ace9c1fa596417ef9cad693b1477a78fdbc078230fbfc4\"" Jun 25 18:37:42.875314 kubelet[3440]: E0625 18:37:42.875086 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:44.414932 containerd[1971]: time="2024-06-25T18:37:44.414869992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:44.417353 containerd[1971]: time="2024-06-25T18:37:44.416859012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 18:37:44.423986 containerd[1971]: time="2024-06-25T18:37:44.422148018Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:44.452834 containerd[1971]: time="2024-06-25T18:37:44.452767729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:44.456144 containerd[1971]: time="2024-06-25T18:37:44.456070071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.3004046s" Jun 25 18:37:44.456397 containerd[1971]: time="2024-06-25T18:37:44.456156159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:37:44.458989 containerd[1971]: time="2024-06-25T18:37:44.458952741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:37:44.529444 containerd[1971]: time="2024-06-25T18:37:44.529392068Z" level=info msg="CreateContainer within sandbox \"36578935a00ddfc50dbb241713cb3a0354ce3136b700b6e46624fdaf31e3614c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:37:44.571679 containerd[1971]: time="2024-06-25T18:37:44.571287248Z" level=info msg="CreateContainer within sandbox \"36578935a00ddfc50dbb241713cb3a0354ce3136b700b6e46624fdaf31e3614c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"55c9017e9323008930b2e8c63f1b8751d800ac2b089a5809fcd19fcd1ebaef5b\"" Jun 25 18:37:44.576523 containerd[1971]: time="2024-06-25T18:37:44.573317273Z" level=info msg="StartContainer for \"55c9017e9323008930b2e8c63f1b8751d800ac2b089a5809fcd19fcd1ebaef5b\"" Jun 25 18:37:44.694980 systemd[1]: Started cri-containerd-55c9017e9323008930b2e8c63f1b8751d800ac2b089a5809fcd19fcd1ebaef5b.scope - libcontainer container 55c9017e9323008930b2e8c63f1b8751d800ac2b089a5809fcd19fcd1ebaef5b. Jun 25 18:37:44.810099 containerd[1971]: time="2024-06-25T18:37:44.810052015Z" level=info msg="StartContainer for \"55c9017e9323008930b2e8c63f1b8751d800ac2b089a5809fcd19fcd1ebaef5b\" returns successfully" Jun 25 18:37:44.849513 kubelet[3440]: E0625 18:37:44.849009 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:45.118068 kubelet[3440]: E0625 18:37:45.118029 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.118498 kubelet[3440]: W0625 18:37:45.118365 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.118498 kubelet[3440]: E0625 18:37:45.118407 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.118951 kubelet[3440]: E0625 18:37:45.118932 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.118951 kubelet[3440]: W0625 18:37:45.118950 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.119184 kubelet[3440]: E0625 18:37:45.118973 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.119752 kubelet[3440]: E0625 18:37:45.119674 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.119752 kubelet[3440]: W0625 18:37:45.119689 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.119752 kubelet[3440]: E0625 18:37:45.119708 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:45.121699 kubelet[3440]: E0625 18:37:45.120007 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.121699 kubelet[3440]: W0625 18:37:45.120021 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.121699 kubelet[3440]: E0625 18:37:45.120039 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.121699 kubelet[3440]: E0625 18:37:45.120394 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.121699 kubelet[3440]: W0625 18:37:45.120404 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.121699 kubelet[3440]: E0625 18:37:45.120581 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.121699 kubelet[3440]: E0625 18:37:45.121027 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.121699 kubelet[3440]: W0625 18:37:45.121073 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.121699 kubelet[3440]: E0625 18:37:45.121093 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.122774 kubelet[3440]: E0625 18:37:45.122186 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.122774 kubelet[3440]: W0625 18:37:45.122202 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.122774 kubelet[3440]: E0625 18:37:45.122220 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.122774 kubelet[3440]: E0625 18:37:45.122539 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.122774 kubelet[3440]: W0625 18:37:45.122550 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.122774 kubelet[3440]: E0625 18:37:45.122569 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:45.123304 kubelet[3440]: E0625 18:37:45.123016 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.123304 kubelet[3440]: W0625 18:37:45.123028 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.123304 kubelet[3440]: E0625 18:37:45.123046 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.123905 kubelet[3440]: E0625 18:37:45.123894 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.124126 kubelet[3440]: W0625 18:37:45.123905 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.124126 kubelet[3440]: E0625 18:37:45.123923 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.124424 kubelet[3440]: E0625 18:37:45.124402 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.124507 kubelet[3440]: W0625 18:37:45.124432 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.124507 kubelet[3440]: E0625 18:37:45.124451 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.124958 kubelet[3440]: E0625 18:37:45.124942 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.125033 kubelet[3440]: W0625 18:37:45.124959 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.125033 kubelet[3440]: E0625 18:37:45.124977 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.125395 kubelet[3440]: E0625 18:37:45.125379 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.125395 kubelet[3440]: W0625 18:37:45.125396 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.125650 kubelet[3440]: E0625 18:37:45.125414 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:45.126024 kubelet[3440]: E0625 18:37:45.125856 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.126024 kubelet[3440]: W0625 18:37:45.125872 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.126024 kubelet[3440]: E0625 18:37:45.125890 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.126738 kubelet[3440]: E0625 18:37:45.126344 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.126738 kubelet[3440]: W0625 18:37:45.126360 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.126738 kubelet[3440]: E0625 18:37:45.126376 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.168318 kubelet[3440]: E0625 18:37:45.168232 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.168318 kubelet[3440]: W0625 18:37:45.168258 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.168318 kubelet[3440]: E0625 18:37:45.168285 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.170041 kubelet[3440]: E0625 18:37:45.168938 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.170041 kubelet[3440]: W0625 18:37:45.168973 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.170041 kubelet[3440]: E0625 18:37:45.169017 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.170041 kubelet[3440]: E0625 18:37:45.169377 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.170041 kubelet[3440]: W0625 18:37:45.169391 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.170041 kubelet[3440]: E0625 18:37:45.169428 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:45.170041 kubelet[3440]: E0625 18:37:45.169783 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.170041 kubelet[3440]: W0625 18:37:45.169795 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.170041 kubelet[3440]: E0625 18:37:45.169849 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.170552 kubelet[3440]: E0625 18:37:45.170108 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.170552 kubelet[3440]: W0625 18:37:45.170118 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.170552 kubelet[3440]: E0625 18:37:45.170150 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.170552 kubelet[3440]: E0625 18:37:45.170413 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.170552 kubelet[3440]: W0625 18:37:45.170423 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.170552 kubelet[3440]: E0625 18:37:45.170454 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.171648 kubelet[3440]: E0625 18:37:45.170924 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.171648 kubelet[3440]: W0625 18:37:45.170940 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.171648 kubelet[3440]: E0625 18:37:45.170965 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.171648 kubelet[3440]: E0625 18:37:45.171268 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.171648 kubelet[3440]: W0625 18:37:45.171279 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.171648 kubelet[3440]: E0625 18:37:45.171364 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:45.171648 kubelet[3440]: E0625 18:37:45.171549 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.171648 kubelet[3440]: W0625 18:37:45.171558 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.171648 kubelet[3440]: E0625 18:37:45.171644 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.172181 kubelet[3440]: E0625 18:37:45.171845 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.172181 kubelet[3440]: W0625 18:37:45.171855 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.172181 kubelet[3440]: E0625 18:37:45.171876 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.172181 kubelet[3440]: E0625 18:37:45.172118 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.172181 kubelet[3440]: W0625 18:37:45.172128 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.172181 kubelet[3440]: E0625 18:37:45.172159 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.172463 kubelet[3440]: E0625 18:37:45.172377 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.172463 kubelet[3440]: W0625 18:37:45.172386 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.172463 kubelet[3440]: E0625 18:37:45.172417 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.173933 kubelet[3440]: E0625 18:37:45.172722 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.173933 kubelet[3440]: W0625 18:37:45.172734 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.173933 kubelet[3440]: E0625 18:37:45.172762 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:45.173933 kubelet[3440]: E0625 18:37:45.173708 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.173933 kubelet[3440]: W0625 18:37:45.173720 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.173933 kubelet[3440]: E0625 18:37:45.173806 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.174275 kubelet[3440]: E0625 18:37:45.173999 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.174275 kubelet[3440]: W0625 18:37:45.174009 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.174275 kubelet[3440]: E0625 18:37:45.174027 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.174275 kubelet[3440]: E0625 18:37:45.174237 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.174275 kubelet[3440]: W0625 18:37:45.174246 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.174275 kubelet[3440]: E0625 18:37:45.174274 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.174537 kubelet[3440]: E0625 18:37:45.174499 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.174537 kubelet[3440]: W0625 18:37:45.174508 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.174537 kubelet[3440]: E0625 18:37:45.174523 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:45.175716 kubelet[3440]: E0625 18:37:45.175326 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:45.175716 kubelet[3440]: W0625 18:37:45.175339 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:45.175716 kubelet[3440]: E0625 18:37:45.175357 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:46.053161 kubelet[3440]: I0625 18:37:46.053066 3440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:37:46.096849 containerd[1971]: time="2024-06-25T18:37:46.096802454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:46.099911 containerd[1971]: time="2024-06-25T18:37:46.099777018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:37:46.105650 containerd[1971]: time="2024-06-25T18:37:46.102986896Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:46.109838 containerd[1971]: time="2024-06-25T18:37:46.109791935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:46.112526 containerd[1971]: time="2024-06-25T18:37:46.112461915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.652963313s" Jun 25 18:37:46.112731 containerd[1971]: time="2024-06-25T18:37:46.112710331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:37:46.117150 containerd[1971]: time="2024-06-25T18:37:46.117108825Z" level=info msg="CreateContainer within sandbox \"3b456c0c39fff30d57ace9c1fa596417ef9cad693b1477a78fdbc078230fbfc4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:37:46.146227 kubelet[3440]: E0625 18:37:46.145483 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.146227 kubelet[3440]: W0625 18:37:46.145507 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.146227 kubelet[3440]: E0625 18:37:46.145540 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.146865 kubelet[3440]: E0625 18:37:46.146648 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.146865 kubelet[3440]: W0625 18:37:46.146686 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.146865 kubelet[3440]: E0625 18:37:46.146713 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:46.148327 kubelet[3440]: E0625 18:37:46.148313 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.148446 kubelet[3440]: W0625 18:37:46.148434 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.148547 kubelet[3440]: E0625 18:37:46.148537 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.148916 kubelet[3440]: E0625 18:37:46.148904 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.153909 kubelet[3440]: W0625 18:37:46.148961 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.153909 kubelet[3440]: E0625 18:37:46.148981 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.153909 kubelet[3440]: E0625 18:37:46.149522 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.153909 kubelet[3440]: W0625 18:37:46.149535 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.153909 kubelet[3440]: E0625 18:37:46.149569 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.153909 kubelet[3440]: E0625 18:37:46.149825 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.153909 kubelet[3440]: W0625 18:37:46.149834 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.153909 kubelet[3440]: E0625 18:37:46.149850 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.153909 kubelet[3440]: E0625 18:37:46.150179 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.153909 kubelet[3440]: W0625 18:37:46.150189 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.154351 kubelet[3440]: E0625 18:37:46.150206 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:46.154351 kubelet[3440]: E0625 18:37:46.150438 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.154351 kubelet[3440]: W0625 18:37:46.150448 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.154351 kubelet[3440]: E0625 18:37:46.150465 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.154351 kubelet[3440]: E0625 18:37:46.150899 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.154351 kubelet[3440]: W0625 18:37:46.150911 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.154351 kubelet[3440]: E0625 18:37:46.150928 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.154351 kubelet[3440]: E0625 18:37:46.151144 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.154351 kubelet[3440]: W0625 18:37:46.151153 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.154351 kubelet[3440]: E0625 18:37:46.151167 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.154767 kubelet[3440]: E0625 18:37:46.153064 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.154767 kubelet[3440]: W0625 18:37:46.153079 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.154767 kubelet[3440]: E0625 18:37:46.153100 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.154767 kubelet[3440]: E0625 18:37:46.153406 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.154767 kubelet[3440]: W0625 18:37:46.153416 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.154767 kubelet[3440]: E0625 18:37:46.153436 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:46.155873 kubelet[3440]: E0625 18:37:46.155115 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.155873 kubelet[3440]: W0625 18:37:46.155128 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.155873 kubelet[3440]: E0625 18:37:46.155148 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.155873 kubelet[3440]: E0625 18:37:46.155383 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.155873 kubelet[3440]: W0625 18:37:46.155393 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.155873 kubelet[3440]: E0625 18:37:46.155409 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.155873 kubelet[3440]: E0625 18:37:46.155621 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.155873 kubelet[3440]: W0625 18:37:46.155629 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.155873 kubelet[3440]: E0625 18:37:46.155644 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.162329 containerd[1971]: time="2024-06-25T18:37:46.161906159Z" level=info msg="CreateContainer within sandbox \"3b456c0c39fff30d57ace9c1fa596417ef9cad693b1477a78fdbc078230fbfc4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f\"" Jun 25 18:37:46.163735 containerd[1971]: time="2024-06-25T18:37:46.163524587Z" level=info msg="StartContainer for \"fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f\"" Jun 25 18:37:46.189768 kubelet[3440]: E0625 18:37:46.187515 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.189768 kubelet[3440]: W0625 18:37:46.187539 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.189768 kubelet[3440]: E0625 18:37:46.187681 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:46.191246 kubelet[3440]: E0625 18:37:46.191090 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.191246 kubelet[3440]: W0625 18:37:46.191113 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.191246 kubelet[3440]: E0625 18:37:46.191154 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.193679 kubelet[3440]: E0625 18:37:46.191510 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.193679 kubelet[3440]: W0625 18:37:46.191541 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.193679 kubelet[3440]: E0625 18:37:46.191569 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.193947 kubelet[3440]: E0625 18:37:46.193751 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.193947 kubelet[3440]: W0625 18:37:46.193765 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.193947 kubelet[3440]: E0625 18:37:46.193788 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.194512 kubelet[3440]: E0625 18:37:46.194170 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.195721 kubelet[3440]: W0625 18:37:46.194185 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.199693 kubelet[3440]: E0625 18:37:46.199648 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.200046 kubelet[3440]: E0625 18:37:46.200021 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.200322 kubelet[3440]: W0625 18:37:46.200247 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.200507 kubelet[3440]: E0625 18:37:46.200419 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:46.202621 kubelet[3440]: E0625 18:37:46.202595 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.202621 kubelet[3440]: W0625 18:37:46.202619 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.206596 kubelet[3440]: E0625 18:37:46.206037 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.206596 kubelet[3440]: W0625 18:37:46.206060 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.206799 kubelet[3440]: E0625 18:37:46.206467 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.206851 kubelet[3440]: E0625 18:37:46.206842 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.207138 kubelet[3440]: E0625 18:37:46.206997 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.207327 kubelet[3440]: W0625 18:37:46.207143 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.207327 kubelet[3440]: E0625 18:37:46.207277 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.208401 kubelet[3440]: E0625 18:37:46.208161 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.208570 kubelet[3440]: W0625 18:37:46.208421 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.208570 kubelet[3440]: E0625 18:37:46.208499 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.208932 kubelet[3440]: E0625 18:37:46.208796 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.208932 kubelet[3440]: W0625 18:37:46.208826 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.208932 kubelet[3440]: E0625 18:37:46.208846 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:46.210885 kubelet[3440]: E0625 18:37:46.209350 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.210885 kubelet[3440]: W0625 18:37:46.209364 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.210885 kubelet[3440]: E0625 18:37:46.209426 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.210885 kubelet[3440]: E0625 18:37:46.209904 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.210885 kubelet[3440]: W0625 18:37:46.209915 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.210885 kubelet[3440]: E0625 18:37:46.209934 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.210885 kubelet[3440]: E0625 18:37:46.210200 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.210885 kubelet[3440]: W0625 18:37:46.210224 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.210885 kubelet[3440]: E0625 18:37:46.210244 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.210885 kubelet[3440]: E0625 18:37:46.210547 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.211406 kubelet[3440]: W0625 18:37:46.210557 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.211406 kubelet[3440]: E0625 18:37:46.210582 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.211406 kubelet[3440]: E0625 18:37:46.211072 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.211406 kubelet[3440]: W0625 18:37:46.211083 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.211406 kubelet[3440]: E0625 18:37:46.211110 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:46.211406 kubelet[3440]: E0625 18:37:46.211348 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.211406 kubelet[3440]: W0625 18:37:46.211358 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.211406 kubelet[3440]: E0625 18:37:46.211375 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.211768 kubelet[3440]: E0625 18:37:46.211633 3440 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:46.211768 kubelet[3440]: W0625 18:37:46.211678 3440 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:46.211768 kubelet[3440]: E0625 18:37:46.211695 3440 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:46.247911 systemd[1]: Started cri-containerd-fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f.scope - libcontainer container fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f. Jun 25 18:37:46.353470 containerd[1971]: time="2024-06-25T18:37:46.353352901Z" level=info msg="StartContainer for \"fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f\" returns successfully" Jun 25 18:37:46.392876 systemd[1]: cri-containerd-fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f.scope: Deactivated successfully. Jun 25 18:37:46.473564 systemd[1]: run-containerd-runc-k8s.io-fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f-runc.lBuQ7Z.mount: Deactivated successfully. Jun 25 18:37:46.486773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f-rootfs.mount: Deactivated successfully. 
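The repeated driver-call.go and plugins.go errors above are kubelet's FlexVolume prober walking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, exec'ing each driver (here nodeagent~uds/uds) with the init command and JSON-decoding its stdout; because the executable is missing, the call produces empty output and the decode fails. A minimal Go sketch of that decode step (field names are assumed for illustration, this is not kubelet source):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus approximates the JSON a FlexVolume driver is expected to
    // print in response to "init"; the exact struct here is illustrative.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The uds executable was not found in $PATH, so the driver call produced no output.
        output := []byte("")
        var st DriverStatus
        if err := json.Unmarshal(output, &st); err != nil {
            // Prints: unexpected end of JSON input -- the same error logged above.
            fmt.Println("Failed to unmarshal output for command: init, error:", err)
        }
    }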
Jun 25 18:37:46.846776 kubelet[3440]: E0625 18:37:46.846735 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:46.917014 containerd[1971]: time="2024-06-25T18:37:46.904173120Z" level=info msg="shim disconnected" id=fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f namespace=k8s.io Jun 25 18:37:46.917014 containerd[1971]: time="2024-06-25T18:37:46.916984793Z" level=warning msg="cleaning up after shim disconnected" id=fb666d5c81155932dc855e9fc3721583b4e25265584f9b03cf75cf8ebbe99b8f namespace=k8s.io Jun 25 18:37:46.917014 containerd[1971]: time="2024-06-25T18:37:46.917008137Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:37:47.068681 containerd[1971]: time="2024-06-25T18:37:47.067779258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:37:47.099195 kubelet[3440]: I0625 18:37:47.096601 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-778cc54b77-xcb64" podStartSLOduration=3.793541807 podCreationTimestamp="2024-06-25 18:37:40 +0000 UTC" firstStartedPulling="2024-06-25 18:37:41.153648046 +0000 UTC m=+18.648751787" lastFinishedPulling="2024-06-25 18:37:44.456626866 +0000 UTC m=+21.951730616" observedRunningTime="2024-06-25 18:37:45.107213694 +0000 UTC m=+22.602317454" watchObservedRunningTime="2024-06-25 18:37:47.096520636 +0000 UTC m=+24.591624396" Jun 25 18:37:48.850561 kubelet[3440]: E0625 18:37:48.848405 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:50.848576 kubelet[3440]: E0625 18:37:50.848539 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:52.727909 kubelet[3440]: I0625 18:37:52.727874 3440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:37:52.848436 kubelet[3440]: E0625 18:37:52.848284 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:53.232767 containerd[1971]: time="2024-06-25T18:37:53.232077389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:53.234640 containerd[1971]: time="2024-06-25T18:37:53.234588213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:37:53.236686 containerd[1971]: time="2024-06-25T18:37:53.236475115Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:53.246267 containerd[1971]: time="2024-06-25T18:37:53.246218871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:53.248236 containerd[1971]: time="2024-06-25T18:37:53.247924183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 6.18008935s" Jun 25 18:37:53.248441 containerd[1971]: time="2024-06-25T18:37:53.248416091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:37:53.256941 containerd[1971]: time="2024-06-25T18:37:53.256889429Z" level=info msg="CreateContainer within sandbox \"3b456c0c39fff30d57ace9c1fa596417ef9cad693b1477a78fdbc078230fbfc4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:37:53.345647 containerd[1971]: time="2024-06-25T18:37:53.345568019Z" level=info msg="CreateContainer within sandbox \"3b456c0c39fff30d57ace9c1fa596417ef9cad693b1477a78fdbc078230fbfc4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e10326dd448a71e1ad8a655fc1b576ed353201ffc0c48f22920c3df15e72d112\"" Jun 25 18:37:53.347710 containerd[1971]: time="2024-06-25T18:37:53.346557388Z" level=info msg="StartContainer for \"e10326dd448a71e1ad8a655fc1b576ed353201ffc0c48f22920c3df15e72d112\"" Jun 25 18:37:53.456985 systemd[1]: Started cri-containerd-e10326dd448a71e1ad8a655fc1b576ed353201ffc0c48f22920c3df15e72d112.scope - libcontainer container e10326dd448a71e1ad8a655fc1b576ed353201ffc0c48f22920c3df15e72d112. Jun 25 18:37:53.548501 containerd[1971]: time="2024-06-25T18:37:53.548196691Z" level=info msg="StartContainer for \"e10326dd448a71e1ad8a655fc1b576ed353201ffc0c48f22920c3df15e72d112\" returns successfully" Jun 25 18:37:54.848453 kubelet[3440]: E0625 18:37:54.846890 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:56.847978 kubelet[3440]: E0625 18:37:56.847051 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:57.735400 systemd[1]: cri-containerd-e10326dd448a71e1ad8a655fc1b576ed353201ffc0c48f22920c3df15e72d112.scope: Deactivated successfully. Jun 25 18:37:57.800845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e10326dd448a71e1ad8a655fc1b576ed353201ffc0c48f22920c3df15e72d112-rootfs.mount: Deactivated successfully. 
Jun 25 18:37:57.820214 containerd[1971]: time="2024-06-25T18:37:57.820140182Z" level=info msg="shim disconnected" id=e10326dd448a71e1ad8a655fc1b576ed353201ffc0c48f22920c3df15e72d112 namespace=k8s.io Jun 25 18:37:57.820214 containerd[1971]: time="2024-06-25T18:37:57.820210408Z" level=warning msg="cleaning up after shim disconnected" id=e10326dd448a71e1ad8a655fc1b576ed353201ffc0c48f22920c3df15e72d112 namespace=k8s.io Jun 25 18:37:57.820214 containerd[1971]: time="2024-06-25T18:37:57.820222470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:37:57.861671 kubelet[3440]: I0625 18:37:57.859825 3440 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:37:57.901286 kubelet[3440]: I0625 18:37:57.901240 3440 topology_manager.go:215] "Topology Admit Handler" podUID="d552fb93-79f7-4773-a1f5-9c9523f4d422" podNamespace="kube-system" podName="coredns-5dd5756b68-j5lgb" Jun 25 18:37:57.906452 kubelet[3440]: I0625 18:37:57.906414 3440 topology_manager.go:215] "Topology Admit Handler" podUID="4f07a82d-4c15-4612-8dd9-058a38c3b4c8" podNamespace="calico-system" podName="calico-kube-controllers-f7ddf6898-lw9bj" Jun 25 18:37:57.910168 kubelet[3440]: I0625 18:37:57.910132 3440 topology_manager.go:215] "Topology Admit Handler" podUID="efacc6a4-8734-4d68-9427-e531fbc08015" podNamespace="kube-system" podName="coredns-5dd5756b68-lfsz2" Jun 25 18:37:57.923889 systemd[1]: Created slice kubepods-burstable-podd552fb93_79f7_4773_a1f5_9c9523f4d422.slice - libcontainer container kubepods-burstable-podd552fb93_79f7_4773_a1f5_9c9523f4d422.slice. Jun 25 18:37:57.939179 systemd[1]: Created slice kubepods-besteffort-pod4f07a82d_4c15_4612_8dd9_058a38c3b4c8.slice - libcontainer container kubepods-besteffort-pod4f07a82d_4c15_4612_8dd9_058a38c3b4c8.slice. Jun 25 18:37:57.958433 systemd[1]: Created slice kubepods-burstable-podefacc6a4_8734_4d68_9427_e531fbc08015.slice - libcontainer container kubepods-burstable-podefacc6a4_8734_4d68_9427_e531fbc08015.slice. 
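The kubepods-*.slice units created here follow the systemd cgroup driver's naming convention: the pod's QoS class plus its UID with dashes replaced by underscores. A small sketch of that mapping (the helper name is illustrative, not kubelet source):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName rebuilds the slice name the systemd cgroup driver uses for
    // a pod of the given QoS class and UID (illustrative helper).
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID of coredns-5dd5756b68-j5lgb from the Topology Admit Handler entry above.
        fmt.Println(podSliceName("burstable", "d552fb93-79f7-4773-a1f5-9c9523f4d422"))
        // -> kubepods-burstable-podd552fb93_79f7_4773_a1f5_9c9523f4d422.slice
    }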
Jun 25 18:37:58.025019 kubelet[3440]: I0625 18:37:58.024974 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phcvc\" (UniqueName: \"kubernetes.io/projected/4f07a82d-4c15-4612-8dd9-058a38c3b4c8-kube-api-access-phcvc\") pod \"calico-kube-controllers-f7ddf6898-lw9bj\" (UID: \"4f07a82d-4c15-4612-8dd9-058a38c3b4c8\") " pod="calico-system/calico-kube-controllers-f7ddf6898-lw9bj" Jun 25 18:37:58.025280 kubelet[3440]: I0625 18:37:58.025256 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sn9t\" (UniqueName: \"kubernetes.io/projected/efacc6a4-8734-4d68-9427-e531fbc08015-kube-api-access-6sn9t\") pod \"coredns-5dd5756b68-lfsz2\" (UID: \"efacc6a4-8734-4d68-9427-e531fbc08015\") " pod="kube-system/coredns-5dd5756b68-lfsz2" Jun 25 18:37:58.025503 kubelet[3440]: I0625 18:37:58.025307 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f07a82d-4c15-4612-8dd9-058a38c3b4c8-tigera-ca-bundle\") pod \"calico-kube-controllers-f7ddf6898-lw9bj\" (UID: \"4f07a82d-4c15-4612-8dd9-058a38c3b4c8\") " pod="calico-system/calico-kube-controllers-f7ddf6898-lw9bj" Jun 25 18:37:58.025503 kubelet[3440]: I0625 18:37:58.025341 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/efacc6a4-8734-4d68-9427-e531fbc08015-config-volume\") pod \"coredns-5dd5756b68-lfsz2\" (UID: \"efacc6a4-8734-4d68-9427-e531fbc08015\") " pod="kube-system/coredns-5dd5756b68-lfsz2" Jun 25 18:37:58.025503 kubelet[3440]: I0625 18:37:58.025379 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpttf\" (UniqueName: \"kubernetes.io/projected/d552fb93-79f7-4773-a1f5-9c9523f4d422-kube-api-access-kpttf\") pod \"coredns-5dd5756b68-j5lgb\" (UID: \"d552fb93-79f7-4773-a1f5-9c9523f4d422\") " pod="kube-system/coredns-5dd5756b68-j5lgb" Jun 25 18:37:58.025503 kubelet[3440]: I0625 18:37:58.025417 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d552fb93-79f7-4773-a1f5-9c9523f4d422-config-volume\") pod \"coredns-5dd5756b68-j5lgb\" (UID: \"d552fb93-79f7-4773-a1f5-9c9523f4d422\") " pod="kube-system/coredns-5dd5756b68-j5lgb" Jun 25 18:37:58.111363 containerd[1971]: time="2024-06-25T18:37:58.111034709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:37:58.248685 containerd[1971]: time="2024-06-25T18:37:58.248209213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-j5lgb,Uid:d552fb93-79f7-4773-a1f5-9c9523f4d422,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:58.265749 containerd[1971]: time="2024-06-25T18:37:58.264905499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lfsz2,Uid:efacc6a4-8734-4d68-9427-e531fbc08015,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:58.550435 containerd[1971]: time="2024-06-25T18:37:58.550360245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7ddf6898-lw9bj,Uid:4f07a82d-4c15-4612-8dd9-058a38c3b4c8,Namespace:calico-system,Attempt:0,}" Jun 25 18:37:58.648415 containerd[1971]: time="2024-06-25T18:37:58.648264235Z" level=error msg="Failed to destroy network for sandbox 
\"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.658600 containerd[1971]: time="2024-06-25T18:37:58.658523578Z" level=error msg="Failed to destroy network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.658990 containerd[1971]: time="2024-06-25T18:37:58.658693758Z" level=error msg="encountered an error cleaning up failed sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.658990 containerd[1971]: time="2024-06-25T18:37:58.658786354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lfsz2,Uid:efacc6a4-8734-4d68-9427-e531fbc08015,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.659493 kubelet[3440]: E0625 18:37:58.659378 3440 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.659842 kubelet[3440]: E0625 18:37:58.659717 3440 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lfsz2" Jun 25 18:37:58.659842 kubelet[3440]: E0625 18:37:58.659761 3440 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lfsz2" Jun 25 18:37:58.660051 kubelet[3440]: E0625 18:37:58.659864 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-lfsz2_kube-system(efacc6a4-8734-4d68-9427-e531fbc08015)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-lfsz2_kube-system(efacc6a4-8734-4d68-9427-e531fbc08015)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lfsz2" podUID="efacc6a4-8734-4d68-9427-e531fbc08015" Jun 25 18:37:58.663681 containerd[1971]: time="2024-06-25T18:37:58.663596198Z" level=error msg="encountered an error cleaning up failed sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.664462 containerd[1971]: time="2024-06-25T18:37:58.664290938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-j5lgb,Uid:d552fb93-79f7-4773-a1f5-9c9523f4d422,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.665311 kubelet[3440]: E0625 18:37:58.664881 3440 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.665311 kubelet[3440]: E0625 18:37:58.664941 3440 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-j5lgb" Jun 25 18:37:58.665311 kubelet[3440]: E0625 18:37:58.664973 3440 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-j5lgb" Jun 25 18:37:58.665477 kubelet[3440]: E0625 18:37:58.665032 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-j5lgb_kube-system(d552fb93-79f7-4773-a1f5-9c9523f4d422)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-j5lgb_kube-system(d552fb93-79f7-4773-a1f5-9c9523f4d422)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-j5lgb" 
podUID="d552fb93-79f7-4773-a1f5-9c9523f4d422" Jun 25 18:37:58.733563 containerd[1971]: time="2024-06-25T18:37:58.733513907Z" level=error msg="Failed to destroy network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.733976 containerd[1971]: time="2024-06-25T18:37:58.733937894Z" level=error msg="encountered an error cleaning up failed sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.734095 containerd[1971]: time="2024-06-25T18:37:58.734006454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7ddf6898-lw9bj,Uid:4f07a82d-4c15-4612-8dd9-058a38c3b4c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.734874 kubelet[3440]: E0625 18:37:58.734346 3440 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.734874 kubelet[3440]: E0625 18:37:58.734414 3440 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7ddf6898-lw9bj" Jun 25 18:37:58.734874 kubelet[3440]: E0625 18:37:58.734436 3440 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7ddf6898-lw9bj" Jun 25 18:37:58.735054 kubelet[3440]: E0625 18:37:58.734565 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f7ddf6898-lw9bj_calico-system(4f07a82d-4c15-4612-8dd9-058a38c3b4c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f7ddf6898-lw9bj_calico-system(4f07a82d-4c15-4612-8dd9-058a38c3b4c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7ddf6898-lw9bj" podUID="4f07a82d-4c15-4612-8dd9-058a38c3b4c8" Jun 25 18:37:58.803587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086-shm.mount: Deactivated successfully. Jun 25 18:37:58.855148 systemd[1]: Created slice kubepods-besteffort-pod53634da2_c3fe_455c_8218_c4b393d92a3f.slice - libcontainer container kubepods-besteffort-pod53634da2_c3fe_455c_8218_c4b393d92a3f.slice. Jun 25 18:37:58.858318 containerd[1971]: time="2024-06-25T18:37:58.858269844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4cm5c,Uid:53634da2-c3fe-455c-8218-c4b393d92a3f,Namespace:calico-system,Attempt:0,}" Jun 25 18:37:58.963489 containerd[1971]: time="2024-06-25T18:37:58.963434068Z" level=error msg="Failed to destroy network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.968058 containerd[1971]: time="2024-06-25T18:37:58.967960849Z" level=error msg="encountered an error cleaning up failed sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.968171 containerd[1971]: time="2024-06-25T18:37:58.968112423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4cm5c,Uid:53634da2-c3fe-455c-8218-c4b393d92a3f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.968625 kubelet[3440]: E0625 18:37:58.968561 3440 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:58.969326 kubelet[3440]: E0625 18:37:58.968676 3440 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4cm5c" Jun 25 18:37:58.969326 kubelet[3440]: E0625 18:37:58.968737 3440 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4cm5c" Jun 25 18:37:58.969326 kubelet[3440]: E0625 18:37:58.968833 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4cm5c_calico-system(53634da2-c3fe-455c-8218-c4b393d92a3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4cm5c_calico-system(53634da2-c3fe-455c-8218-c4b393d92a3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:58.970583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234-shm.mount: Deactivated successfully. Jun 25 18:37:59.115357 kubelet[3440]: I0625 18:37:59.113828 3440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:37:59.115357 kubelet[3440]: I0625 18:37:59.115338 3440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:37:59.120771 containerd[1971]: time="2024-06-25T18:37:59.120131365Z" level=info msg="StopPodSandbox for \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\"" Jun 25 18:37:59.120771 containerd[1971]: time="2024-06-25T18:37:59.120421674Z" level=info msg="Ensure that sandbox cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234 in task-service has been cleanup successfully" Jun 25 18:37:59.124528 kubelet[3440]: I0625 18:37:59.124493 3440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:37:59.132687 containerd[1971]: time="2024-06-25T18:37:59.132291300Z" level=info msg="StopPodSandbox for \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\"" Jun 25 18:37:59.135694 containerd[1971]: time="2024-06-25T18:37:59.133787191Z" level=info msg="Ensure that sandbox ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3 in task-service has been cleanup successfully" Jun 25 18:37:59.139102 containerd[1971]: time="2024-06-25T18:37:59.138844685Z" level=info msg="StopPodSandbox for \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\"" Jun 25 18:37:59.141757 kubelet[3440]: I0625 18:37:59.141726 3440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:37:59.142474 containerd[1971]: time="2024-06-25T18:37:59.142429063Z" level=info msg="StopPodSandbox for \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\"" Jun 25 18:37:59.142908 containerd[1971]: time="2024-06-25T18:37:59.142866790Z" level=info msg="Ensure that sandbox 3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086 in task-service has been cleanup successfully" Jun 25 18:37:59.144565 containerd[1971]: time="2024-06-25T18:37:59.143390750Z" level=info msg="Ensure that sandbox de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796 in task-service 
has been cleanup successfully" Jun 25 18:37:59.244806 containerd[1971]: time="2024-06-25T18:37:59.244740146Z" level=error msg="StopPodSandbox for \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\" failed" error="failed to destroy network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:59.245086 kubelet[3440]: E0625 18:37:59.245060 3440 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:37:59.245216 kubelet[3440]: E0625 18:37:59.245153 3440 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796"} Jun 25 18:37:59.245216 kubelet[3440]: E0625 18:37:59.245201 3440 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f07a82d-4c15-4612-8dd9-058a38c3b4c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:37:59.245531 kubelet[3440]: E0625 18:37:59.245251 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f07a82d-4c15-4612-8dd9-058a38c3b4c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7ddf6898-lw9bj" podUID="4f07a82d-4c15-4612-8dd9-058a38c3b4c8" Jun 25 18:37:59.254184 containerd[1971]: time="2024-06-25T18:37:59.253862498Z" level=error msg="StopPodSandbox for \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\" failed" error="failed to destroy network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:59.254470 kubelet[3440]: E0625 18:37:59.254307 3440 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:37:59.254470 
kubelet[3440]: E0625 18:37:59.254352 3440 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234"} Jun 25 18:37:59.254470 kubelet[3440]: E0625 18:37:59.254403 3440 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53634da2-c3fe-455c-8218-c4b393d92a3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:37:59.254470 kubelet[3440]: E0625 18:37:59.254440 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53634da2-c3fe-455c-8218-c4b393d92a3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4cm5c" podUID="53634da2-c3fe-455c-8218-c4b393d92a3f" Jun 25 18:37:59.293170 containerd[1971]: time="2024-06-25T18:37:59.293032306Z" level=error msg="StopPodSandbox for \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\" failed" error="failed to destroy network for sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:59.293517 kubelet[3440]: E0625 18:37:59.293476 3440 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:37:59.293642 kubelet[3440]: E0625 18:37:59.293536 3440 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3"} Jun 25 18:37:59.293642 kubelet[3440]: E0625 18:37:59.293593 3440 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"efacc6a4-8734-4d68-9427-e531fbc08015\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:37:59.293949 kubelet[3440]: E0625 18:37:59.293710 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"efacc6a4-8734-4d68-9427-e531fbc08015\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lfsz2" podUID="efacc6a4-8734-4d68-9427-e531fbc08015" Jun 25 18:37:59.300290 containerd[1971]: time="2024-06-25T18:37:59.300233898Z" level=error msg="StopPodSandbox for \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\" failed" error="failed to destroy network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:59.301387 kubelet[3440]: E0625 18:37:59.300498 3440 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:37:59.301387 kubelet[3440]: E0625 18:37:59.300552 3440 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086"} Jun 25 18:37:59.301387 kubelet[3440]: E0625 18:37:59.301083 3440 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d552fb93-79f7-4773-a1f5-9c9523f4d422\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:37:59.301387 kubelet[3440]: E0625 18:37:59.301169 3440 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d552fb93-79f7-4773-a1f5-9c9523f4d422\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-j5lgb" podUID="d552fb93-79f7-4773-a1f5-9c9523f4d422" Jun 25 18:38:07.946390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177954245.mount: Deactivated successfully. 
Jun 25 18:38:08.063870 containerd[1971]: time="2024-06-25T18:38:08.058548989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 18:38:08.064487 containerd[1971]: time="2024-06-25T18:38:08.056320911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:08.096275 containerd[1971]: time="2024-06-25T18:38:08.096223268Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:08.100168 containerd[1971]: time="2024-06-25T18:38:08.100120217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:08.101016 containerd[1971]: time="2024-06-25T18:38:08.100976929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 9.989895376s" Jun 25 18:38:08.101155 containerd[1971]: time="2024-06-25T18:38:08.101132493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 18:38:08.201455 containerd[1971]: time="2024-06-25T18:38:08.201332003Z" level=info msg="CreateContainer within sandbox \"3b456c0c39fff30d57ace9c1fa596417ef9cad693b1477a78fdbc078230fbfc4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:38:08.247381 containerd[1971]: time="2024-06-25T18:38:08.247040831Z" level=info msg="CreateContainer within sandbox \"3b456c0c39fff30d57ace9c1fa596417ef9cad693b1477a78fdbc078230fbfc4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6f125eda90ed27f0cbedf91a25a161590aff02c518f027eeb0d812ff5a773423\"" Jun 25 18:38:08.267420 containerd[1971]: time="2024-06-25T18:38:08.267339435Z" level=info msg="StartContainer for \"6f125eda90ed27f0cbedf91a25a161590aff02c518f027eeb0d812ff5a773423\"" Jun 25 18:38:08.431955 systemd[1]: Started cri-containerd-6f125eda90ed27f0cbedf91a25a161590aff02c518f027eeb0d812ff5a773423.scope - libcontainer container 6f125eda90ed27f0cbedf91a25a161590aff02c518f027eeb0d812ff5a773423. Jun 25 18:38:08.518028 containerd[1971]: time="2024-06-25T18:38:08.517782950Z" level=info msg="StartContainer for \"6f125eda90ed27f0cbedf91a25a161590aff02c518f027eeb0d812ff5a773423\" returns successfully" Jun 25 18:38:08.814591 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:38:08.816078 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jun 25 18:38:09.853151 containerd[1971]: time="2024-06-25T18:38:09.853108674Z" level=info msg="StopPodSandbox for \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\"" Jun 25 18:38:10.047844 kubelet[3440]: I0625 18:38:10.047794 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-zkpvl" podStartSLOduration=3.113238536 podCreationTimestamp="2024-06-25 18:37:40 +0000 UTC" firstStartedPulling="2024-06-25 18:37:41.193151946 +0000 UTC m=+18.688255692" lastFinishedPulling="2024-06-25 18:38:08.10162753 +0000 UTC m=+45.596731267" observedRunningTime="2024-06-25 18:38:09.28116497 +0000 UTC m=+46.776268757" watchObservedRunningTime="2024-06-25 18:38:10.021714111 +0000 UTC m=+47.516817871" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.029 [INFO][4554] k8s.go 608: Cleaning up netns ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.030 [INFO][4554] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" iface="eth0" netns="/var/run/netns/cni-29f935e8-8061-a3b6-a299-4b2fea864f5c" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.030 [INFO][4554] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" iface="eth0" netns="/var/run/netns/cni-29f935e8-8061-a3b6-a299-4b2fea864f5c" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.031 [INFO][4554] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" iface="eth0" netns="/var/run/netns/cni-29f935e8-8061-a3b6-a299-4b2fea864f5c" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.031 [INFO][4554] k8s.go 615: Releasing IP address(es) ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.031 [INFO][4554] utils.go 188: Calico CNI releasing IP address ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.316 [INFO][4560] ipam_plugin.go 411: Releasing address using handleID ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" HandleID="k8s-pod-network.de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.317 [INFO][4560] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.317 [INFO][4560] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.364 [WARNING][4560] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" HandleID="k8s-pod-network.de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.364 [INFO][4560] ipam_plugin.go 439: Releasing address using workloadID ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" HandleID="k8s-pod-network.de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.367 [INFO][4560] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:10.375779 containerd[1971]: 2024-06-25 18:38:10.372 [INFO][4554] k8s.go 621: Teardown processing complete. ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:10.381039 containerd[1971]: time="2024-06-25T18:38:10.379729317Z" level=info msg="TearDown network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\" successfully" Jun 25 18:38:10.381039 containerd[1971]: time="2024-06-25T18:38:10.379771819Z" level=info msg="StopPodSandbox for \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\" returns successfully" Jun 25 18:38:10.381039 containerd[1971]: time="2024-06-25T18:38:10.380556897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7ddf6898-lw9bj,Uid:4f07a82d-4c15-4612-8dd9-058a38c3b4c8,Namespace:calico-system,Attempt:1,}" Jun 25 18:38:10.383370 systemd[1]: run-netns-cni\x2d29f935e8\x2d8061\x2da3b6\x2da299\x2d4b2fea864f5c.mount: Deactivated successfully. Jun 25 18:38:10.950913 containerd[1971]: time="2024-06-25T18:38:10.950867386Z" level=info msg="StopPodSandbox for \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\"" Jun 25 18:38:10.954205 containerd[1971]: time="2024-06-25T18:38:10.953940730Z" level=info msg="StopPodSandbox for \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\"" Jun 25 18:38:11.384913 systemd-networkd[1806]: caliaefbdae7683: Link UP Jun 25 18:38:11.386710 systemd-networkd[1806]: caliaefbdae7683: Gained carrier Jun 25 18:38:11.386888 (udev-worker)[4493]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:10.827 [INFO][4674] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:10.850 [INFO][4674] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0 calico-kube-controllers-f7ddf6898- calico-system 4f07a82d-4c15-4612-8dd9-058a38c3b4c8 723 0 2024-06-25 18:37:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f7ddf6898 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-210 calico-kube-controllers-f7ddf6898-lw9bj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaefbdae7683 [] []}} ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Namespace="calico-system" Pod="calico-kube-controllers-f7ddf6898-lw9bj" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:10.852 [INFO][4674] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Namespace="calico-system" Pod="calico-kube-controllers-f7ddf6898-lw9bj" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.156 [INFO][4683] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" HandleID="k8s-pod-network.b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.184 [INFO][4683] ipam_plugin.go 264: Auto assigning IP ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" HandleID="k8s-pod-network.b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003113e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-210", "pod":"calico-kube-controllers-f7ddf6898-lw9bj", "timestamp":"2024-06-25 18:38:11.155489069 +0000 UTC"}, Hostname:"ip-172-31-29-210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.185 [INFO][4683] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.185 [INFO][4683] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.185 [INFO][4683] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-210' Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.190 [INFO][4683] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" host="ip-172-31-29-210" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.212 [INFO][4683] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-210" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.224 [INFO][4683] ipam.go 489: Trying affinity for 192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.235 [INFO][4683] ipam.go 155: Attempting to load block cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.242 [INFO][4683] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.242 [INFO][4683] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.68.0/26 handle="k8s-pod-network.b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" host="ip-172-31-29-210" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.252 [INFO][4683] ipam.go 1685: Creating new handle: k8s-pod-network.b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161 Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.269 [INFO][4683] ipam.go 1203: Writing block in order to claim IPs block=192.168.68.0/26 handle="k8s-pod-network.b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" host="ip-172-31-29-210" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.339 [INFO][4683] ipam.go 1216: Successfully claimed IPs: [192.168.68.1/26] block=192.168.68.0/26 handle="k8s-pod-network.b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" host="ip-172-31-29-210" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.342 [INFO][4683] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.68.1/26] handle="k8s-pod-network.b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" host="ip-172-31-29-210" Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.343 [INFO][4683] ipam_plugin.go 373: Released host-wide IPAM lock. 
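The entries above trace Calico's IPAM flow for the first pod endpoint on this node: take the host-wide IPAM lock, find the block 192.168.68.0/26 affine to ip-172-31-29-210, load it, claim 192.168.68.1 under a new per-container handle, write the block back, and release the lock. The sketch below mirrors that sequence using only the Go standard library; the blockAllocator type, its field names, and the in-memory mutex are illustrative assumptions, not Calico's actual ipam.go implementation.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator is a toy stand-in for per-block IPAM state:
// one /26 block affine to this host, plus a host-wide lock.
type blockAllocator struct {
	mu    sync.Mutex            // stands in for the host-wide IPAM lock
	block netip.Prefix          // e.g. 192.168.68.0/26
	inUse map[netip.Addr]string // address -> handle ID
}

// assign claims the next free address in the block for the given handle,
// mimicking "Attempting to assign 1 addresses from block ..." /
// "Successfully claimed IPs".
func (b *blockAllocator) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."

	for a := b.block.Addr().Next(); b.block.Contains(a); a = a.Next() {
		if _, taken := b.inUse[a]; !taken {
			b.inUse[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.block)
}

func main() {
	alloc := &blockAllocator{
		block: netip.MustParsePrefix("192.168.68.0/26"),
		inUse: map[netip.Addr]string{},
	}
	// Sequential endpoints on the node receive .1, .2, .3, ... as seen in this log.
	for _, h := range []string{"k8s-pod-network.b36def17...", "k8s-pod-network.866f4dd7..."} {
		ip, _ := alloc.assign(h)
		fmt.Println(ip, "->", h)
	}
}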
Jun 25 18:38:11.452205 containerd[1971]: 2024-06-25 18:38:11.344 [INFO][4683] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.68.1/26] IPv6=[] ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" HandleID="k8s-pod-network.b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:11.453488 containerd[1971]: 2024-06-25 18:38:11.355 [INFO][4674] k8s.go 386: Populated endpoint ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Namespace="calico-system" Pod="calico-kube-controllers-f7ddf6898-lw9bj" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0", GenerateName:"calico-kube-controllers-f7ddf6898-", Namespace:"calico-system", SelfLink:"", UID:"4f07a82d-4c15-4612-8dd9-058a38c3b4c8", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7ddf6898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"", Pod:"calico-kube-controllers-f7ddf6898-lw9bj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaefbdae7683", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:11.453488 containerd[1971]: 2024-06-25 18:38:11.356 [INFO][4674] k8s.go 387: Calico CNI using IPs: [192.168.68.1/32] ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Namespace="calico-system" Pod="calico-kube-controllers-f7ddf6898-lw9bj" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:11.453488 containerd[1971]: 2024-06-25 18:38:11.356 [INFO][4674] dataplane_linux.go 68: Setting the host side veth name to caliaefbdae7683 ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Namespace="calico-system" Pod="calico-kube-controllers-f7ddf6898-lw9bj" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:11.453488 containerd[1971]: 2024-06-25 18:38:11.376 [INFO][4674] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Namespace="calico-system" Pod="calico-kube-controllers-f7ddf6898-lw9bj" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:11.453488 containerd[1971]: 2024-06-25 18:38:11.376 [INFO][4674] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Namespace="calico-system" Pod="calico-kube-controllers-f7ddf6898-lw9bj" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0", GenerateName:"calico-kube-controllers-f7ddf6898-", Namespace:"calico-system", SelfLink:"", UID:"4f07a82d-4c15-4612-8dd9-058a38c3b4c8", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7ddf6898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161", Pod:"calico-kube-controllers-f7ddf6898-lw9bj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaefbdae7683", MAC:"ba:1b:05:46:d8:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:11.453488 containerd[1971]: 2024-06-25 18:38:11.419 [INFO][4674] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161" Namespace="calico-system" Pod="calico-kube-controllers-f7ddf6898-lw9bj" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.300 [INFO][4722] k8s.go 608: Cleaning up netns ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.301 [INFO][4722] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" iface="eth0" netns="/var/run/netns/cni-1de842c5-660d-459a-bd65-3d921fb4f17f" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.301 [INFO][4722] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" iface="eth0" netns="/var/run/netns/cni-1de842c5-660d-459a-bd65-3d921fb4f17f" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.303 [INFO][4722] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" iface="eth0" netns="/var/run/netns/cni-1de842c5-660d-459a-bd65-3d921fb4f17f" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.303 [INFO][4722] k8s.go 615: Releasing IP address(es) ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.303 [INFO][4722] utils.go 188: Calico CNI releasing IP address ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.535 [INFO][4735] ipam_plugin.go 411: Releasing address using handleID ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" HandleID="k8s-pod-network.cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.540 [INFO][4735] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.540 [INFO][4735] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.577 [WARNING][4735] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" HandleID="k8s-pod-network.cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.577 [INFO][4735] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" HandleID="k8s-pod-network.cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.583 [INFO][4735] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:11.617150 containerd[1971]: 2024-06-25 18:38:11.592 [INFO][4722] k8s.go 621: Teardown processing complete. ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:11.620625 containerd[1971]: time="2024-06-25T18:38:11.618843360Z" level=info msg="TearDown network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\" successfully" Jun 25 18:38:11.620625 containerd[1971]: time="2024-06-25T18:38:11.618917124Z" level=info msg="StopPodSandbox for \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\" returns successfully" Jun 25 18:38:11.632697 containerd[1971]: time="2024-06-25T18:38:11.632633143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4cm5c,Uid:53634da2-c3fe-455c-8218-c4b393d92a3f,Namespace:calico-system,Attempt:1,}" Jun 25 18:38:11.633148 systemd[1]: run-netns-cni\x2d1de842c5\x2d660d\x2d459a\x2dbd65\x2d3d921fb4f17f.mount: Deactivated successfully. Jun 25 18:38:11.655697 containerd[1971]: time="2024-06-25T18:38:11.652137406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:11.655697 containerd[1971]: time="2024-06-25T18:38:11.652225167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:11.655697 containerd[1971]: time="2024-06-25T18:38:11.652257364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:11.655697 containerd[1971]: time="2024-06-25T18:38:11.652279008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.306 [INFO][4716] k8s.go 608: Cleaning up netns ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.306 [INFO][4716] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" iface="eth0" netns="/var/run/netns/cni-a176e8a3-b539-fac7-5120-80f8b5d480ae" Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.307 [INFO][4716] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" iface="eth0" netns="/var/run/netns/cni-a176e8a3-b539-fac7-5120-80f8b5d480ae" Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.307 [INFO][4716] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" iface="eth0" netns="/var/run/netns/cni-a176e8a3-b539-fac7-5120-80f8b5d480ae" Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.307 [INFO][4716] k8s.go 615: Releasing IP address(es) ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.307 [INFO][4716] utils.go 188: Calico CNI releasing IP address ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.571 [INFO][4736] ipam_plugin.go 411: Releasing address using handleID ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" HandleID="k8s-pod-network.ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.573 [INFO][4736] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.588 [INFO][4736] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.665 [WARNING][4736] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" HandleID="k8s-pod-network.ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.665 [INFO][4736] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" HandleID="k8s-pod-network.ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.686 [INFO][4736] ipam_plugin.go 373: Released host-wide IPAM lock. 
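Both teardown paths above hit the same pattern: the workload's veth is already gone ("Nothing to do") and the IPAM handle no longer exists ("Asked to release address but it doesn't exist. Ignoring"), yet StopPodSandbox still returns successfully. A minimal sketch of that idempotent-cleanup idea, assuming hypothetical cleanupNetns and releaseAddress helpers and using only the Go standard library to probe the netns path:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// cleanupNetns treats a missing netns as success, mirroring
// "Workload's veth was already gone. Nothing to do."
func cleanupNetns(path string) error {
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		return nil // already gone: nothing to do
	}
	// Real CNI teardown does far more (deletes the veth inside the netns,
	// unmounts the bind mount); this only removes the mount point path.
	return os.Remove(path)
}

// releaseAddress ignores unknown handles, mirroring
// "Asked to release address but it doesn't exist. Ignoring."
func releaseAddress(allocations map[string]string, handleID string) {
	if _, ok := allocations[handleID]; !ok {
		fmt.Printf("WARNING: handle %s not found, ignoring\n", handleID)
		return
	}
	delete(allocations, handleID)
}

func main() {
	_ = cleanupNetns("/var/run/netns/cni-1de842c5-660d-459a-bd65-3d921fb4f17f")
	releaseAddress(map[string]string{}, "k8s-pod-network.cd1cbce4...")
}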
Jun 25 18:38:11.721193 containerd[1971]: 2024-06-25 18:38:11.705 [INFO][4716] k8s.go 621: Teardown processing complete. ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:11.727159 containerd[1971]: time="2024-06-25T18:38:11.723758269Z" level=info msg="TearDown network for sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\" successfully" Jun 25 18:38:11.727159 containerd[1971]: time="2024-06-25T18:38:11.723797458Z" level=info msg="StopPodSandbox for \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\" returns successfully" Jun 25 18:38:11.727159 containerd[1971]: time="2024-06-25T18:38:11.725140065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lfsz2,Uid:efacc6a4-8734-4d68-9427-e531fbc08015,Namespace:kube-system,Attempt:1,}" Jun 25 18:38:11.731230 systemd[1]: run-netns-cni\x2da176e8a3\x2db539\x2dfac7\x2d5120\x2d80f8b5d480ae.mount: Deactivated successfully. Jun 25 18:38:11.863021 systemd[1]: Started cri-containerd-b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161.scope - libcontainer container b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161. Jun 25 18:38:12.390154 (udev-worker)[4491]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:38:12.398232 systemd-networkd[1806]: cali729d3804f60: Link UP Jun 25 18:38:12.398546 systemd-networkd[1806]: cali729d3804f60: Gained carrier Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:11.991 [INFO][4804] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.054 [INFO][4804] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0 coredns-5dd5756b68- kube-system efacc6a4-8734-4d68-9427-e531fbc08015 728 0 2024-06-25 18:37:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-210 coredns-5dd5756b68-lfsz2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali729d3804f60 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Namespace="kube-system" Pod="coredns-5dd5756b68-lfsz2" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.054 [INFO][4804] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Namespace="kube-system" Pod="coredns-5dd5756b68-lfsz2" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.177 [INFO][4831] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" HandleID="k8s-pod-network.866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.232 [INFO][4831] ipam_plugin.go 264: Auto assigning IP ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" HandleID="k8s-pod-network.866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" 
Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319bb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-210", "pod":"coredns-5dd5756b68-lfsz2", "timestamp":"2024-06-25 18:38:12.177606502 +0000 UTC"}, Hostname:"ip-172-31-29-210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.233 [INFO][4831] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.236 [INFO][4831] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.236 [INFO][4831] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-210' Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.243 [INFO][4831] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" host="ip-172-31-29-210" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.262 [INFO][4831] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-210" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.298 [INFO][4831] ipam.go 489: Trying affinity for 192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.302 [INFO][4831] ipam.go 155: Attempting to load block cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.311 [INFO][4831] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.311 [INFO][4831] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.68.0/26 handle="k8s-pod-network.866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" host="ip-172-31-29-210" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.315 [INFO][4831] ipam.go 1685: Creating new handle: k8s-pod-network.866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2 Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.332 [INFO][4831] ipam.go 1203: Writing block in order to claim IPs block=192.168.68.0/26 handle="k8s-pod-network.866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" host="ip-172-31-29-210" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.356 [INFO][4831] ipam.go 1216: Successfully claimed IPs: [192.168.68.2/26] block=192.168.68.0/26 handle="k8s-pod-network.866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" host="ip-172-31-29-210" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.356 [INFO][4831] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.68.2/26] handle="k8s-pod-network.866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" host="ip-172-31-29-210" Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.357 [INFO][4831] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:38:12.476335 containerd[1971]: 2024-06-25 18:38:12.357 [INFO][4831] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.68.2/26] IPv6=[] ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" HandleID="k8s-pod-network.866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:12.482577 containerd[1971]: 2024-06-25 18:38:12.368 [INFO][4804] k8s.go 386: Populated endpoint ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Namespace="kube-system" Pod="coredns-5dd5756b68-lfsz2" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"efacc6a4-8734-4d68-9427-e531fbc08015", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"", Pod:"coredns-5dd5756b68-lfsz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali729d3804f60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:12.482577 containerd[1971]: 2024-06-25 18:38:12.368 [INFO][4804] k8s.go 387: Calico CNI using IPs: [192.168.68.2/32] ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Namespace="kube-system" Pod="coredns-5dd5756b68-lfsz2" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:12.482577 containerd[1971]: 2024-06-25 18:38:12.368 [INFO][4804] dataplane_linux.go 68: Setting the host side veth name to cali729d3804f60 ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Namespace="kube-system" Pod="coredns-5dd5756b68-lfsz2" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:12.482577 containerd[1971]: 2024-06-25 18:38:12.398 [INFO][4804] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Namespace="kube-system" Pod="coredns-5dd5756b68-lfsz2" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:12.482577 containerd[1971]: 
2024-06-25 18:38:12.405 [INFO][4804] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Namespace="kube-system" Pod="coredns-5dd5756b68-lfsz2" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"efacc6a4-8734-4d68-9427-e531fbc08015", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2", Pod:"coredns-5dd5756b68-lfsz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali729d3804f60", MAC:"46:e2:f8:12:f0:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:12.485290 containerd[1971]: 2024-06-25 18:38:12.449 [INFO][4804] k8s.go 500: Wrote updated endpoint to datastore ContainerID="866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2" Namespace="kube-system" Pod="coredns-5dd5756b68-lfsz2" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:12.485290 containerd[1971]: time="2024-06-25T18:38:12.481112325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7ddf6898-lw9bj,Uid:4f07a82d-4c15-4612-8dd9-058a38c3b4c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161\"" Jun 25 18:38:12.490381 containerd[1971]: time="2024-06-25T18:38:12.489511332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:38:12.563608 systemd-networkd[1806]: cali6f5b78d0501: Link UP Jun 25 18:38:12.569227 systemd-networkd[1806]: cali6f5b78d0501: Gained carrier Jun 25 18:38:12.605760 containerd[1971]: time="2024-06-25T18:38:12.603335896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:12.605760 containerd[1971]: time="2024-06-25T18:38:12.603475345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:12.605760 containerd[1971]: time="2024-06-25T18:38:12.603511653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:12.605760 containerd[1971]: time="2024-06-25T18:38:12.603533808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.058 [INFO][4795] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.097 [INFO][4795] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0 csi-node-driver- calico-system 53634da2-c3fe-455c-8218-c4b393d92a3f 729 0 2024-06-25 18:37:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-29-210 csi-node-driver-4cm5c eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali6f5b78d0501 [] []}} ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Namespace="calico-system" Pod="csi-node-driver-4cm5c" WorkloadEndpoint="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.097 [INFO][4795] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Namespace="calico-system" Pod="csi-node-driver-4cm5c" WorkloadEndpoint="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.240 [INFO][4839] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" HandleID="k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.294 [INFO][4839] ipam_plugin.go 264: Auto assigning IP ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" HandleID="k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032bb00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-210", "pod":"csi-node-driver-4cm5c", "timestamp":"2024-06-25 18:38:12.24061247 +0000 UTC"}, Hostname:"ip-172-31-29-210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.297 [INFO][4839] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.358 [INFO][4839] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
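Each CNI ADD above issues exactly one IPAM request whose arguments are printed verbatim: Num4:1, Num6:0, a per-container HandleID, and attributes naming the namespace, node, pod, and timestamp. The struct below simply mirrors those printed fields so the request shape is easier to read; the type and field names are a paraphrase for illustration, not Calico's actual ipam.AutoAssignArgs definition.

package main

import (
	"fmt"
	"time"
)

// autoAssignRequest mirrors the fields visible in the logged assignArgs;
// names are taken from the log output itself, purely for readability.
type autoAssignRequest struct {
	Num4, Num6 int               // how many IPv4/IPv6 addresses to assign (here: 1 and 0)
	HandleID   string            // per-container handle, e.g. "k8s-pod-network.<containerID>"
	Attrs      map[string]string // namespace, node, pod, timestamp
}

func main() {
	req := autoAssignRequest{
		Num4:     1,
		HandleID: "k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e",
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "ip-172-31-29-210",
			"pod":       "csi-node-driver-4cm5c",
			"timestamp": time.Date(2024, 6, 25, 18, 38, 12, 0, time.UTC).Format(time.RFC3339),
		},
	}
	fmt.Printf("%+v\n", req)
}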
Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.358 [INFO][4839] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-210' Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.369 [INFO][4839] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" host="ip-172-31-29-210" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.406 [INFO][4839] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-210" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.433 [INFO][4839] ipam.go 489: Trying affinity for 192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.455 [INFO][4839] ipam.go 155: Attempting to load block cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.464 [INFO][4839] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.465 [INFO][4839] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.68.0/26 handle="k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" host="ip-172-31-29-210" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.475 [INFO][4839] ipam.go 1685: Creating new handle: k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.496 [INFO][4839] ipam.go 1203: Writing block in order to claim IPs block=192.168.68.0/26 handle="k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" host="ip-172-31-29-210" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.536 [INFO][4839] ipam.go 1216: Successfully claimed IPs: [192.168.68.3/26] block=192.168.68.0/26 handle="k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" host="ip-172-31-29-210" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.536 [INFO][4839] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.68.3/26] handle="k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" host="ip-172-31-29-210" Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.536 [INFO][4839] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:38:12.671896 containerd[1971]: 2024-06-25 18:38:12.536 [INFO][4839] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.68.3/26] IPv6=[] ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" HandleID="k8s-pod-network.fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:12.676574 containerd[1971]: 2024-06-25 18:38:12.552 [INFO][4795] k8s.go 386: Populated endpoint ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Namespace="calico-system" Pod="csi-node-driver-4cm5c" WorkloadEndpoint="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53634da2-c3fe-455c-8218-c4b393d92a3f", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"", Pod:"csi-node-driver-4cm5c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6f5b78d0501", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:12.676574 containerd[1971]: 2024-06-25 18:38:12.554 [INFO][4795] k8s.go 387: Calico CNI using IPs: [192.168.68.3/32] ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Namespace="calico-system" Pod="csi-node-driver-4cm5c" WorkloadEndpoint="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:12.676574 containerd[1971]: 2024-06-25 18:38:12.554 [INFO][4795] dataplane_linux.go 68: Setting the host side veth name to cali6f5b78d0501 ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Namespace="calico-system" Pod="csi-node-driver-4cm5c" WorkloadEndpoint="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:12.676574 containerd[1971]: 2024-06-25 18:38:12.572 [INFO][4795] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Namespace="calico-system" Pod="csi-node-driver-4cm5c" WorkloadEndpoint="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:12.676574 containerd[1971]: 2024-06-25 18:38:12.577 [INFO][4795] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Namespace="calico-system" Pod="csi-node-driver-4cm5c" WorkloadEndpoint="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53634da2-c3fe-455c-8218-c4b393d92a3f", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e", Pod:"csi-node-driver-4cm5c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6f5b78d0501", MAC:"8a:95:92:25:f3:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:12.676574 containerd[1971]: 2024-06-25 18:38:12.652 [INFO][4795] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e" Namespace="calico-system" Pod="csi-node-driver-4cm5c" WorkloadEndpoint="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:12.707085 systemd[1]: Started cri-containerd-866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2.scope - libcontainer container 866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2. Jun 25 18:38:12.822629 containerd[1971]: time="2024-06-25T18:38:12.821677187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:12.822629 containerd[1971]: time="2024-06-25T18:38:12.821767514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:12.822629 containerd[1971]: time="2024-06-25T18:38:12.821799764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:12.822629 containerd[1971]: time="2024-06-25T18:38:12.821821078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:12.857528 containerd[1971]: time="2024-06-25T18:38:12.857489195Z" level=info msg="StopPodSandbox for \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\"" Jun 25 18:38:12.902957 systemd[1]: Started cri-containerd-fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e.scope - libcontainer container fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e. 
Jun 25 18:38:12.942569 systemd-networkd[1806]: caliaefbdae7683: Gained IPv6LL Jun 25 18:38:13.003901 containerd[1971]: time="2024-06-25T18:38:13.003850943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lfsz2,Uid:efacc6a4-8734-4d68-9427-e531fbc08015,Namespace:kube-system,Attempt:1,} returns sandbox id \"866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2\"" Jun 25 18:38:13.037633 containerd[1971]: time="2024-06-25T18:38:13.037576390Z" level=info msg="CreateContainer within sandbox \"866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:38:13.107680 containerd[1971]: time="2024-06-25T18:38:13.106075752Z" level=info msg="CreateContainer within sandbox \"866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63c95d8f7215cacfcd6b1fc327a8c32a0a188cb867e5c6b31cdf95d4b2a968eb\"" Jun 25 18:38:13.107680 containerd[1971]: time="2024-06-25T18:38:13.107128963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4cm5c,Uid:53634da2-c3fe-455c-8218-c4b393d92a3f,Namespace:calico-system,Attempt:1,} returns sandbox id \"fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e\"" Jun 25 18:38:13.108340 containerd[1971]: time="2024-06-25T18:38:13.108299695Z" level=info msg="StartContainer for \"63c95d8f7215cacfcd6b1fc327a8c32a0a188cb867e5c6b31cdf95d4b2a968eb\"" Jun 25 18:38:13.217239 systemd[1]: Started cri-containerd-63c95d8f7215cacfcd6b1fc327a8c32a0a188cb867e5c6b31cdf95d4b2a968eb.scope - libcontainer container 63c95d8f7215cacfcd6b1fc327a8c32a0a188cb867e5c6b31cdf95d4b2a968eb. Jun 25 18:38:13.354604 containerd[1971]: time="2024-06-25T18:38:13.354565451Z" level=info msg="StartContainer for \"63c95d8f7215cacfcd6b1fc327a8c32a0a188cb867e5c6b31cdf95d4b2a968eb\" returns successfully" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.226 [INFO][4970] k8s.go 608: Cleaning up netns ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.226 [INFO][4970] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" iface="eth0" netns="/var/run/netns/cni-dcc7eac3-4be5-d90a-ea51-84ea49cd89f0" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.229 [INFO][4970] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" iface="eth0" netns="/var/run/netns/cni-dcc7eac3-4be5-d90a-ea51-84ea49cd89f0" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.230 [INFO][4970] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" iface="eth0" netns="/var/run/netns/cni-dcc7eac3-4be5-d90a-ea51-84ea49cd89f0" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.230 [INFO][4970] k8s.go 615: Releasing IP address(es) ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.230 [INFO][4970] utils.go 188: Calico CNI releasing IP address ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.368 [INFO][5025] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" HandleID="k8s-pod-network.3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.369 [INFO][5025] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.370 [INFO][5025] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.389 [WARNING][5025] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" HandleID="k8s-pod-network.3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.390 [INFO][5025] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" HandleID="k8s-pod-network.3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.394 [INFO][5025] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:13.401704 containerd[1971]: 2024-06-25 18:38:13.398 [INFO][4970] k8s.go 621: Teardown processing complete. ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:13.402345 containerd[1971]: time="2024-06-25T18:38:13.402088951Z" level=info msg="TearDown network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\" successfully" Jun 25 18:38:13.402786 containerd[1971]: time="2024-06-25T18:38:13.402748002Z" level=info msg="StopPodSandbox for \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\" returns successfully" Jun 25 18:38:13.404305 containerd[1971]: time="2024-06-25T18:38:13.404274856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-j5lgb,Uid:d552fb93-79f7-4773-a1f5-9c9523f4d422,Namespace:kube-system,Attempt:1,}" Jun 25 18:38:13.642699 systemd[1]: run-netns-cni\x2ddcc7eac3\x2d4be5\x2dd90a\x2dea51\x2d84ea49cd89f0.mount: Deactivated successfully. 
Jun 25 18:38:13.868595 systemd-networkd[1806]: cali2065e08e24a: Link UP Jun 25 18:38:13.874044 systemd-networkd[1806]: cali2065e08e24a: Gained carrier Jun 25 18:38:13.901802 systemd-networkd[1806]: cali729d3804f60: Gained IPv6LL Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.570 [INFO][5050] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0 coredns-5dd5756b68- kube-system d552fb93-79f7-4773-a1f5-9c9523f4d422 749 0 2024-06-25 18:37:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-210 coredns-5dd5756b68-j5lgb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2065e08e24a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Namespace="kube-system" Pod="coredns-5dd5756b68-j5lgb" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.572 [INFO][5050] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Namespace="kube-system" Pod="coredns-5dd5756b68-j5lgb" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.710 [INFO][5065] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" HandleID="k8s-pod-network.111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.737 [INFO][5065] ipam_plugin.go 264: Auto assigning IP ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" HandleID="k8s-pod-network.111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334f70), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-210", "pod":"coredns-5dd5756b68-j5lgb", "timestamp":"2024-06-25 18:38:13.710472911 +0000 UTC"}, Hostname:"ip-172-31-29-210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.737 [INFO][5065] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.738 [INFO][5065] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.738 [INFO][5065] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-210' Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.741 [INFO][5065] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" host="ip-172-31-29-210" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.749 [INFO][5065] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-210" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.758 [INFO][5065] ipam.go 489: Trying affinity for 192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.763 [INFO][5065] ipam.go 155: Attempting to load block cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.768 [INFO][5065] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.768 [INFO][5065] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.68.0/26 handle="k8s-pod-network.111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" host="ip-172-31-29-210" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.773 [INFO][5065] ipam.go 1685: Creating new handle: k8s-pod-network.111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.784 [INFO][5065] ipam.go 1203: Writing block in order to claim IPs block=192.168.68.0/26 handle="k8s-pod-network.111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" host="ip-172-31-29-210" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.852 [INFO][5065] ipam.go 1216: Successfully claimed IPs: [192.168.68.4/26] block=192.168.68.0/26 handle="k8s-pod-network.111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" host="ip-172-31-29-210" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.852 [INFO][5065] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.68.4/26] handle="k8s-pod-network.111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" host="ip-172-31-29-210" Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.852 [INFO][5065] ipam_plugin.go 373: Released host-wide IPAM lock. 
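Each new cali* interface in this log goes through the same systemd-networkd progression: Link UP, Gained carrier, and later Gained IPv6LL once the link-local address clears duplicate address detection. The kernel exposes the first two states under /sys/class/net; a small sketch that reads them, assuming the interfaces exist on the local host (reading carrier on a down interface returns an error, which is handled):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// linkState reads the kernel's view of an interface from sysfs:
// operstate is e.g. "up"/"down", carrier is "1" when a peer is attached.
func linkState(iface string) (operstate, carrier string, err error) {
	base := filepath.Join("/sys/class/net", iface)
	op, err := os.ReadFile(filepath.Join(base, "operstate"))
	if err != nil {
		return "", "", err
	}
	ca, err := os.ReadFile(filepath.Join(base, "carrier"))
	if err != nil {
		return "", "", err
	}
	return strings.TrimSpace(string(op)), strings.TrimSpace(string(ca)), nil
}

func main() {
	for _, ifc := range []string{"caliaefbdae7683", "cali729d3804f60", "cali2065e08e24a"} {
		op, ca, err := linkState(ifc)
		if err != nil {
			fmt.Println(ifc, err)
			continue
		}
		fmt.Printf("%s operstate=%s carrier=%s\n", ifc, op, ca)
	}
}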
Jun 25 18:38:13.978617 containerd[1971]: 2024-06-25 18:38:13.852 [INFO][5065] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.68.4/26] IPv6=[] ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" HandleID="k8s-pod-network.111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.982777 containerd[1971]: 2024-06-25 18:38:13.862 [INFO][5050] k8s.go 386: Populated endpoint ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Namespace="kube-system" Pod="coredns-5dd5756b68-j5lgb" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d552fb93-79f7-4773-a1f5-9c9523f4d422", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"", Pod:"coredns-5dd5756b68-j5lgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2065e08e24a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.982777 containerd[1971]: 2024-06-25 18:38:13.862 [INFO][5050] k8s.go 387: Calico CNI using IPs: [192.168.68.4/32] ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Namespace="kube-system" Pod="coredns-5dd5756b68-j5lgb" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.982777 containerd[1971]: 2024-06-25 18:38:13.862 [INFO][5050] dataplane_linux.go 68: Setting the host side veth name to cali2065e08e24a ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Namespace="kube-system" Pod="coredns-5dd5756b68-j5lgb" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.982777 containerd[1971]: 2024-06-25 18:38:13.880 [INFO][5050] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Namespace="kube-system" Pod="coredns-5dd5756b68-j5lgb" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.982777 containerd[1971]: 
2024-06-25 18:38:13.881 [INFO][5050] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Namespace="kube-system" Pod="coredns-5dd5756b68-j5lgb" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d552fb93-79f7-4773-a1f5-9c9523f4d422", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b", Pod:"coredns-5dd5756b68-j5lgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2065e08e24a", MAC:"46:d2:02:2a:76:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.983299 containerd[1971]: 2024-06-25 18:38:13.968 [INFO][5050] k8s.go 500: Wrote updated endpoint to datastore ContainerID="111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b" Namespace="kube-system" Pod="coredns-5dd5756b68-j5lgb" WorkloadEndpoint="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:13.996801 systemd-networkd[1806]: vxlan.calico: Link UP Jun 25 18:38:13.996811 systemd-networkd[1806]: vxlan.calico: Gained carrier Jun 25 18:38:14.219073 containerd[1971]: time="2024-06-25T18:38:14.204517010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:14.219073 containerd[1971]: time="2024-06-25T18:38:14.205348706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:14.219073 containerd[1971]: time="2024-06-25T18:38:14.205455005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:14.219073 containerd[1971]: time="2024-06-25T18:38:14.213055742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:14.286917 systemd-networkd[1806]: cali6f5b78d0501: Gained IPv6LL Jun 25 18:38:14.292070 systemd[1]: Started cri-containerd-111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b.scope - libcontainer container 111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b. Jun 25 18:38:14.433710 containerd[1971]: time="2024-06-25T18:38:14.433614672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-j5lgb,Uid:d552fb93-79f7-4773-a1f5-9c9523f4d422,Namespace:kube-system,Attempt:1,} returns sandbox id \"111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b\"" Jun 25 18:38:14.444300 containerd[1971]: time="2024-06-25T18:38:14.443361738Z" level=info msg="CreateContainer within sandbox \"111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:38:14.505046 containerd[1971]: time="2024-06-25T18:38:14.504928018Z" level=info msg="CreateContainer within sandbox \"111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0a543b1bffcc369fc016181909903cf2a861022f31c193587de69cb1441c2c7\"" Jun 25 18:38:14.508575 containerd[1971]: time="2024-06-25T18:38:14.507859615Z" level=info msg="StartContainer for \"c0a543b1bffcc369fc016181909903cf2a861022f31c193587de69cb1441c2c7\"" Jun 25 18:38:14.512196 kubelet[3440]: I0625 18:38:14.511615 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lfsz2" podStartSLOduration=41.511558936 podCreationTimestamp="2024-06-25 18:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:38:14.429510991 +0000 UTC m=+51.924614751" watchObservedRunningTime="2024-06-25 18:38:14.511558936 +0000 UTC m=+52.006662699" Jun 25 18:38:14.654385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1960889055.mount: Deactivated successfully. Jun 25 18:38:14.685860 systemd[1]: run-containerd-runc-k8s.io-c0a543b1bffcc369fc016181909903cf2a861022f31c193587de69cb1441c2c7-runc.QRYvhD.mount: Deactivated successfully. Jun 25 18:38:14.705182 systemd[1]: Started cri-containerd-c0a543b1bffcc369fc016181909903cf2a861022f31c193587de69cb1441c2c7.scope - libcontainer container c0a543b1bffcc369fc016181909903cf2a861022f31c193587de69cb1441c2c7. Jun 25 18:38:15.120391 systemd[1]: Started sshd@7-172.31.29.210:22-139.178.68.195:54870.service - OpenSSH per-connection server daemon (139.178.68.195:54870). Jun 25 18:38:15.141845 containerd[1971]: time="2024-06-25T18:38:15.138750305Z" level=info msg="StartContainer for \"c0a543b1bffcc369fc016181909903cf2a861022f31c193587de69cb1441c2c7\" returns successfully" Jun 25 18:38:15.567664 sshd[5219]: Accepted publickey for core from 139.178.68.195 port 54870 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:15.569589 systemd-networkd[1806]: cali2065e08e24a: Gained IPv6LL Jun 25 18:38:15.578011 sshd[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:15.611674 systemd-logind[1946]: New session 8 of user core. Jun 25 18:38:15.619904 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 18:38:15.639583 systemd-networkd[1806]: vxlan.calico: Gained IPv6LL Jun 25 18:38:15.959439 kubelet[3440]: I0625 18:38:15.959322 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-j5lgb" podStartSLOduration=42.959260605 podCreationTimestamp="2024-06-25 18:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:38:15.842892196 +0000 UTC m=+53.337995955" watchObservedRunningTime="2024-06-25 18:38:15.959260605 +0000 UTC m=+53.454364405" Jun 25 18:38:16.631204 sshd[5219]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:16.647134 systemd[1]: sshd@7-172.31.29.210:22-139.178.68.195:54870.service: Deactivated successfully. Jun 25 18:38:16.653435 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:38:16.656841 systemd-logind[1946]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:38:16.660902 systemd-logind[1946]: Removed session 8. Jun 25 18:38:18.238589 ntpd[1941]: Listen normally on 7 vxlan.calico 192.168.68.0:123 Jun 25 18:38:18.238701 ntpd[1941]: Listen normally on 8 caliaefbdae7683 [fe80::ecee:eeff:feee:eeee%4]:123 Jun 25 18:38:18.302100 ntpd[1941]: 25 Jun 18:38:18 ntpd[1941]: Listen normally on 7 vxlan.calico 192.168.68.0:123 Jun 25 18:38:18.302100 ntpd[1941]: 25 Jun 18:38:18 ntpd[1941]: Listen normally on 8 caliaefbdae7683 [fe80::ecee:eeff:feee:eeee%4]:123 Jun 25 18:38:18.302100 ntpd[1941]: 25 Jun 18:38:18 ntpd[1941]: Listen normally on 9 cali729d3804f60 [fe80::ecee:eeff:feee:eeee%5]:123 Jun 25 18:38:18.302100 ntpd[1941]: 25 Jun 18:38:18 ntpd[1941]: Listen normally on 10 cali6f5b78d0501 [fe80::ecee:eeff:feee:eeee%6]:123 Jun 25 18:38:18.302100 ntpd[1941]: 25 Jun 18:38:18 ntpd[1941]: Listen normally on 11 cali2065e08e24a [fe80::ecee:eeff:feee:eeee%7]:123 Jun 25 18:38:18.302100 ntpd[1941]: 25 Jun 18:38:18 ntpd[1941]: Listen normally on 12 vxlan.calico [fe80::64a7:deff:fe74:b157%8]:123 Jun 25 18:38:18.238760 ntpd[1941]: Listen normally on 9 cali729d3804f60 [fe80::ecee:eeff:feee:eeee%5]:123 Jun 25 18:38:18.238801 ntpd[1941]: Listen normally on 10 cali6f5b78d0501 [fe80::ecee:eeff:feee:eeee%6]:123 Jun 25 18:38:18.238842 ntpd[1941]: Listen normally on 11 cali2065e08e24a [fe80::ecee:eeff:feee:eeee%7]:123 Jun 25 18:38:18.238882 ntpd[1941]: Listen normally on 12 vxlan.calico [fe80::64a7:deff:fe74:b157%8]:123 Jun 25 18:38:18.390903 containerd[1971]: time="2024-06-25T18:38:18.390032693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:18.400704 containerd[1971]: time="2024-06-25T18:38:18.400080240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 18:38:18.403770 containerd[1971]: time="2024-06-25T18:38:18.403700839Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:18.413087 containerd[1971]: time="2024-06-25T18:38:18.412890150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:18.415887 containerd[1971]: time="2024-06-25T18:38:18.415775000Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 5.924683019s" Jun 25 18:38:18.415887 containerd[1971]: time="2024-06-25T18:38:18.415828852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 18:38:18.424520 containerd[1971]: time="2024-06-25T18:38:18.424476424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:38:18.455618 containerd[1971]: time="2024-06-25T18:38:18.455573277Z" level=info msg="CreateContainer within sandbox \"b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:38:18.685435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799606021.mount: Deactivated successfully. Jun 25 18:38:19.026194 containerd[1971]: time="2024-06-25T18:38:19.026029968Z" level=info msg="CreateContainer within sandbox \"b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"068a99c16d4249129bd0b7acb93f1b8a1f774dee68aadba5865f46c453fdb71a\"" Jun 25 18:38:19.027874 containerd[1971]: time="2024-06-25T18:38:19.027799615Z" level=info msg="StartContainer for \"068a99c16d4249129bd0b7acb93f1b8a1f774dee68aadba5865f46c453fdb71a\"" Jun 25 18:38:19.102946 systemd[1]: Started cri-containerd-068a99c16d4249129bd0b7acb93f1b8a1f774dee68aadba5865f46c453fdb71a.scope - libcontainer container 068a99c16d4249129bd0b7acb93f1b8a1f774dee68aadba5865f46c453fdb71a. 
Jun 25 18:38:19.198090 containerd[1971]: time="2024-06-25T18:38:19.198048004Z" level=info msg="StartContainer for \"068a99c16d4249129bd0b7acb93f1b8a1f774dee68aadba5865f46c453fdb71a\" returns successfully" Jun 25 18:38:19.625617 kubelet[3440]: I0625 18:38:19.625487 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f7ddf6898-lw9bj" podStartSLOduration=33.696046074 podCreationTimestamp="2024-06-25 18:37:40 +0000 UTC" firstStartedPulling="2024-06-25 18:38:12.487319171 +0000 UTC m=+49.982422923" lastFinishedPulling="2024-06-25 18:38:18.416678966 +0000 UTC m=+55.911782729" observedRunningTime="2024-06-25 18:38:19.623435406 +0000 UTC m=+57.118539162" watchObservedRunningTime="2024-06-25 18:38:19.62540588 +0000 UTC m=+57.120509638" Jun 25 18:38:20.422291 containerd[1971]: time="2024-06-25T18:38:20.420198971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:20.426207 containerd[1971]: time="2024-06-25T18:38:20.426117778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:38:20.428796 containerd[1971]: time="2024-06-25T18:38:20.428270304Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:20.445939 containerd[1971]: time="2024-06-25T18:38:20.445852193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:20.449100 containerd[1971]: time="2024-06-25T18:38:20.448952905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.024232674s" Jun 25 18:38:20.449100 containerd[1971]: time="2024-06-25T18:38:20.449004253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:38:20.454129 containerd[1971]: time="2024-06-25T18:38:20.454081632Z" level=info msg="CreateContainer within sandbox \"fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:38:20.510860 containerd[1971]: time="2024-06-25T18:38:20.510807126Z" level=info msg="CreateContainer within sandbox \"fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"de9a77d8a3503e50826e57a2b63e95c396a81f7990967d4b4f79c4d9c683d50b\"" Jun 25 18:38:20.511835 containerd[1971]: time="2024-06-25T18:38:20.511786355Z" level=info msg="StartContainer for \"de9a77d8a3503e50826e57a2b63e95c396a81f7990967d4b4f79c4d9c683d50b\"" Jun 25 18:38:20.663908 systemd[1]: run-containerd-runc-k8s.io-de9a77d8a3503e50826e57a2b63e95c396a81f7990967d4b4f79c4d9c683d50b-runc.m0i6K2.mount: Deactivated successfully. 
Jun 25 18:38:20.680108 systemd[1]: Started cri-containerd-de9a77d8a3503e50826e57a2b63e95c396a81f7990967d4b4f79c4d9c683d50b.scope - libcontainer container de9a77d8a3503e50826e57a2b63e95c396a81f7990967d4b4f79c4d9c683d50b. Jun 25 18:38:20.738503 containerd[1971]: time="2024-06-25T18:38:20.737987532Z" level=info msg="StartContainer for \"de9a77d8a3503e50826e57a2b63e95c396a81f7990967d4b4f79c4d9c683d50b\" returns successfully" Jun 25 18:38:20.741650 containerd[1971]: time="2024-06-25T18:38:20.740856743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:38:21.665997 systemd[1]: Started sshd@8-172.31.29.210:22-139.178.68.195:51612.service - OpenSSH per-connection server daemon (139.178.68.195:51612). Jun 25 18:38:21.941011 sshd[5350]: Accepted publickey for core from 139.178.68.195 port 51612 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:21.946680 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:22.013197 systemd-logind[1946]: New session 9 of user core. Jun 25 18:38:22.031040 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:38:22.623558 containerd[1971]: time="2024-06-25T18:38:22.622931655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:22.627778 containerd[1971]: time="2024-06-25T18:38:22.627246532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:38:22.638763 containerd[1971]: time="2024-06-25T18:38:22.635716990Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:22.679545 containerd[1971]: time="2024-06-25T18:38:22.679338659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.938426605s" Jun 25 18:38:22.679545 containerd[1971]: time="2024-06-25T18:38:22.679394370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:38:22.680328 containerd[1971]: time="2024-06-25T18:38:22.679877713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:22.683395 containerd[1971]: time="2024-06-25T18:38:22.683358615Z" level=info msg="CreateContainer within sandbox \"fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:38:22.707053 sshd[5350]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:22.723552 systemd[1]: sshd@8-172.31.29.210:22-139.178.68.195:51612.service: Deactivated successfully. Jun 25 18:38:22.726782 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:38:22.730048 systemd-logind[1946]: Session 9 logged out. 
Waiting for processes to exit. Jun 25 18:38:22.733531 containerd[1971]: time="2024-06-25T18:38:22.733403586Z" level=info msg="CreateContainer within sandbox \"fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2771a5df90b39fce9574596114bfa3898a632911da94d9c479aa8f19a9fa27a0\"" Jun 25 18:38:22.733520 systemd-logind[1946]: Removed session 9. Jun 25 18:38:22.734734 containerd[1971]: time="2024-06-25T18:38:22.734698522Z" level=info msg="StartContainer for \"2771a5df90b39fce9574596114bfa3898a632911da94d9c479aa8f19a9fa27a0\"" Jun 25 18:38:22.867274 containerd[1971]: time="2024-06-25T18:38:22.867230721Z" level=info msg="StopPodSandbox for \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\"" Jun 25 18:38:22.934410 systemd[1]: Started cri-containerd-2771a5df90b39fce9574596114bfa3898a632911da94d9c479aa8f19a9fa27a0.scope - libcontainer container 2771a5df90b39fce9574596114bfa3898a632911da94d9c479aa8f19a9fa27a0. Jun 25 18:38:23.013311 containerd[1971]: time="2024-06-25T18:38:23.013070825Z" level=info msg="StartContainer for \"2771a5df90b39fce9574596114bfa3898a632911da94d9c479aa8f19a9fa27a0\" returns successfully" Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:22.996 [WARNING][5394] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53634da2-c3fe-455c-8218-c4b393d92a3f", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e", Pod:"csi-node-driver-4cm5c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6f5b78d0501", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:22.998 [INFO][5394] k8s.go 608: Cleaning up netns ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:22.998 [INFO][5394] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" iface="eth0" netns="" Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:22.998 [INFO][5394] k8s.go 615: Releasing IP address(es) ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:22.998 [INFO][5394] utils.go 188: Calico CNI releasing IP address ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:23.049 [INFO][5414] ipam_plugin.go 411: Releasing address using handleID ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" HandleID="k8s-pod-network.cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:23.049 [INFO][5414] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:23.049 [INFO][5414] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:23.060 [WARNING][5414] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" HandleID="k8s-pod-network.cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:23.060 [INFO][5414] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" HandleID="k8s-pod-network.cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:23.063 [INFO][5414] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:23.071209 containerd[1971]: 2024-06-25 18:38:23.065 [INFO][5394] k8s.go 621: Teardown processing complete. ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:23.072716 containerd[1971]: time="2024-06-25T18:38:23.071630710Z" level=info msg="TearDown network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\" successfully" Jun 25 18:38:23.072716 containerd[1971]: time="2024-06-25T18:38:23.071689859Z" level=info msg="StopPodSandbox for \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\" returns successfully" Jun 25 18:38:23.074983 containerd[1971]: time="2024-06-25T18:38:23.074907771Z" level=info msg="RemovePodSandbox for \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\"" Jun 25 18:38:23.078285 containerd[1971]: time="2024-06-25T18:38:23.077981996Z" level=info msg="Forcibly stopping sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\"" Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.167 [WARNING][5443] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53634da2-c3fe-455c-8218-c4b393d92a3f", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"fa299d15e24a907e2899d56827c0a087f7f6c09725ab9100888354bc1be80d2e", Pod:"csi-node-driver-4cm5c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6f5b78d0501", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.167 [INFO][5443] k8s.go 608: Cleaning up netns ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.168 [INFO][5443] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" iface="eth0" netns="" Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.168 [INFO][5443] k8s.go 615: Releasing IP address(es) ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.168 [INFO][5443] utils.go 188: Calico CNI releasing IP address ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.224 [INFO][5450] ipam_plugin.go 411: Releasing address using handleID ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" HandleID="k8s-pod-network.cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.225 [INFO][5450] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.225 [INFO][5450] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.233 [WARNING][5450] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" HandleID="k8s-pod-network.cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.233 [INFO][5450] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" HandleID="k8s-pod-network.cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Workload="ip--172--31--29--210-k8s-csi--node--driver--4cm5c-eth0" Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.235 [INFO][5450] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:23.243925 containerd[1971]: 2024-06-25 18:38:23.238 [INFO][5443] k8s.go 621: Teardown processing complete. ContainerID="cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234" Jun 25 18:38:23.243925 containerd[1971]: time="2024-06-25T18:38:23.242763692Z" level=info msg="TearDown network for sandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\" successfully" Jun 25 18:38:23.249046 kubelet[3440]: I0625 18:38:23.249009 3440 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:38:23.249418 kubelet[3440]: I0625 18:38:23.249070 3440 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:38:23.261642 containerd[1971]: time="2024-06-25T18:38:23.260843756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:38:23.261642 containerd[1971]: time="2024-06-25T18:38:23.260961054Z" level=info msg="RemovePodSandbox \"cd1cbce4732b4712597f66244c435a7bf1851fe5760810f126f628a2db854234\" returns successfully" Jun 25 18:38:23.262782 containerd[1971]: time="2024-06-25T18:38:23.262086802Z" level=info msg="StopPodSandbox for \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\"" Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.323 [WARNING][5469] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"efacc6a4-8734-4d68-9427-e531fbc08015", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2", Pod:"coredns-5dd5756b68-lfsz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali729d3804f60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.323 [INFO][5469] k8s.go 608: Cleaning up netns ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.324 [INFO][5469] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" iface="eth0" netns="" Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.324 [INFO][5469] k8s.go 615: Releasing IP address(es) ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.324 [INFO][5469] utils.go 188: Calico CNI releasing IP address ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.366 [INFO][5476] ipam_plugin.go 411: Releasing address using handleID ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" HandleID="k8s-pod-network.ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.367 [INFO][5476] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.367 [INFO][5476] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.374 [WARNING][5476] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" HandleID="k8s-pod-network.ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.374 [INFO][5476] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" HandleID="k8s-pod-network.ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.376 [INFO][5476] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:23.379858 containerd[1971]: 2024-06-25 18:38:23.378 [INFO][5469] k8s.go 621: Teardown processing complete. ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:23.380564 containerd[1971]: time="2024-06-25T18:38:23.380527410Z" level=info msg="TearDown network for sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\" successfully" Jun 25 18:38:23.380564 containerd[1971]: time="2024-06-25T18:38:23.380561065Z" level=info msg="StopPodSandbox for \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\" returns successfully" Jun 25 18:38:23.381380 containerd[1971]: time="2024-06-25T18:38:23.381342602Z" level=info msg="RemovePodSandbox for \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\"" Jun 25 18:38:23.381488 containerd[1971]: time="2024-06-25T18:38:23.381381351Z" level=info msg="Forcibly stopping sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\"" Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.464 [WARNING][5495] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"efacc6a4-8734-4d68-9427-e531fbc08015", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"866f4dd72483dd0fa1ffda1c9658148a0b56fe350c1b804b52e514320a9575d2", Pod:"coredns-5dd5756b68-lfsz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali729d3804f60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.465 [INFO][5495] k8s.go 608: Cleaning up netns ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.465 [INFO][5495] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" iface="eth0" netns="" Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.465 [INFO][5495] k8s.go 615: Releasing IP address(es) ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.465 [INFO][5495] utils.go 188: Calico CNI releasing IP address ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.505 [INFO][5501] ipam_plugin.go 411: Releasing address using handleID ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" HandleID="k8s-pod-network.ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.505 [INFO][5501] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.505 [INFO][5501] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.513 [WARNING][5501] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" HandleID="k8s-pod-network.ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.513 [INFO][5501] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" HandleID="k8s-pod-network.ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--lfsz2-eth0" Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.515 [INFO][5501] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:23.522746 containerd[1971]: 2024-06-25 18:38:23.519 [INFO][5495] k8s.go 621: Teardown processing complete. ContainerID="ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3" Jun 25 18:38:23.524939 containerd[1971]: time="2024-06-25T18:38:23.522987190Z" level=info msg="TearDown network for sandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\" successfully" Jun 25 18:38:23.553738 containerd[1971]: time="2024-06-25T18:38:23.548999885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:38:23.553738 containerd[1971]: time="2024-06-25T18:38:23.549094158Z" level=info msg="RemovePodSandbox \"ca195f151811590b92acc27b14cfd95066f3c096acee86ebe7e183eab75084e3\" returns successfully" Jun 25 18:38:23.556313 containerd[1971]: time="2024-06-25T18:38:23.556053490Z" level=info msg="StopPodSandbox for \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\"" Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.696 [WARNING][5519] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0", GenerateName:"calico-kube-controllers-f7ddf6898-", Namespace:"calico-system", SelfLink:"", UID:"4f07a82d-4c15-4612-8dd9-058a38c3b4c8", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7ddf6898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161", Pod:"calico-kube-controllers-f7ddf6898-lw9bj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaefbdae7683", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.697 [INFO][5519] k8s.go 608: Cleaning up netns ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.697 [INFO][5519] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" iface="eth0" netns="" Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.698 [INFO][5519] k8s.go 615: Releasing IP address(es) ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.698 [INFO][5519] utils.go 188: Calico CNI releasing IP address ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.734 [INFO][5529] ipam_plugin.go 411: Releasing address using handleID ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" HandleID="k8s-pod-network.de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.734 [INFO][5529] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.734 [INFO][5529] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.745 [WARNING][5529] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" HandleID="k8s-pod-network.de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.745 [INFO][5529] ipam_plugin.go 439: Releasing address using workloadID ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" HandleID="k8s-pod-network.de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.748 [INFO][5529] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:23.752833 containerd[1971]: 2024-06-25 18:38:23.749 [INFO][5519] k8s.go 621: Teardown processing complete. ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:23.754579 containerd[1971]: time="2024-06-25T18:38:23.752885053Z" level=info msg="TearDown network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\" successfully" Jun 25 18:38:23.754579 containerd[1971]: time="2024-06-25T18:38:23.752915113Z" level=info msg="StopPodSandbox for \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\" returns successfully" Jun 25 18:38:23.754579 containerd[1971]: time="2024-06-25T18:38:23.753838461Z" level=info msg="RemovePodSandbox for \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\"" Jun 25 18:38:23.754579 containerd[1971]: time="2024-06-25T18:38:23.753872169Z" level=info msg="Forcibly stopping sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\"" Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.828 [WARNING][5547] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0", GenerateName:"calico-kube-controllers-f7ddf6898-", Namespace:"calico-system", SelfLink:"", UID:"4f07a82d-4c15-4612-8dd9-058a38c3b4c8", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7ddf6898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"b36def17ace7f60af8a5f848a31d4425885deda4b00fec71f5670415b1e4f161", Pod:"calico-kube-controllers-f7ddf6898-lw9bj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaefbdae7683", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.828 [INFO][5547] k8s.go 608: Cleaning up netns ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.828 [INFO][5547] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" iface="eth0" netns="" Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.828 [INFO][5547] k8s.go 615: Releasing IP address(es) ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.828 [INFO][5547] utils.go 188: Calico CNI releasing IP address ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.874 [INFO][5554] ipam_plugin.go 411: Releasing address using handleID ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" HandleID="k8s-pod-network.de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.875 [INFO][5554] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.875 [INFO][5554] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.882 [WARNING][5554] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" HandleID="k8s-pod-network.de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.882 [INFO][5554] ipam_plugin.go 439: Releasing address using workloadID ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" HandleID="k8s-pod-network.de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Workload="ip--172--31--29--210-k8s-calico--kube--controllers--f7ddf6898--lw9bj-eth0" Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.885 [INFO][5554] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:23.909420 containerd[1971]: 2024-06-25 18:38:23.894 [INFO][5547] k8s.go 621: Teardown processing complete. ContainerID="de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796" Jun 25 18:38:23.915839 containerd[1971]: time="2024-06-25T18:38:23.914923580Z" level=info msg="TearDown network for sandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\" successfully" Jun 25 18:38:23.925874 containerd[1971]: time="2024-06-25T18:38:23.925790781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:38:23.926141 containerd[1971]: time="2024-06-25T18:38:23.925881421Z" level=info msg="RemovePodSandbox \"de44afdaac13194a5fbbe2eb7057fe86b1f73567dfafab98d810cfcce7905796\" returns successfully" Jun 25 18:38:23.926995 containerd[1971]: time="2024-06-25T18:38:23.926954908Z" level=info msg="StopPodSandbox for \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\"" Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.026 [WARNING][5577] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d552fb93-79f7-4773-a1f5-9c9523f4d422", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b", Pod:"coredns-5dd5756b68-j5lgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2065e08e24a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.027 [INFO][5577] k8s.go 608: Cleaning up netns ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.027 [INFO][5577] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" iface="eth0" netns="" Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.027 [INFO][5577] k8s.go 615: Releasing IP address(es) ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.027 [INFO][5577] utils.go 188: Calico CNI releasing IP address ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.111 [INFO][5583] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" HandleID="k8s-pod-network.3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.111 [INFO][5583] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.111 [INFO][5583] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.120 [WARNING][5583] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" HandleID="k8s-pod-network.3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.121 [INFO][5583] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" HandleID="k8s-pod-network.3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.123 [INFO][5583] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:24.132135 containerd[1971]: 2024-06-25 18:38:24.128 [INFO][5577] k8s.go 621: Teardown processing complete. ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:24.132135 containerd[1971]: time="2024-06-25T18:38:24.132013594Z" level=info msg="TearDown network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\" successfully" Jun 25 18:38:24.132135 containerd[1971]: time="2024-06-25T18:38:24.132035351Z" level=info msg="StopPodSandbox for \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\" returns successfully" Jun 25 18:38:24.133915 containerd[1971]: time="2024-06-25T18:38:24.133849055Z" level=info msg="RemovePodSandbox for \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\"" Jun 25 18:38:24.135503 containerd[1971]: time="2024-06-25T18:38:24.133923822Z" level=info msg="Forcibly stopping sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\"" Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.217 [WARNING][5601] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d552fb93-79f7-4773-a1f5-9c9523f4d422", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"111a4399af2a11351eb956b8c995a74e1651d8ae254d050744102760b6bd4a1b", Pod:"coredns-5dd5756b68-j5lgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2065e08e24a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.218 [INFO][5601] k8s.go 608: Cleaning up netns ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.218 [INFO][5601] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" iface="eth0" netns="" Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.218 [INFO][5601] k8s.go 615: Releasing IP address(es) ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.218 [INFO][5601] utils.go 188: Calico CNI releasing IP address ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.287 [INFO][5608] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" HandleID="k8s-pod-network.3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.288 [INFO][5608] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.288 [INFO][5608] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.319 [WARNING][5608] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" HandleID="k8s-pod-network.3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.319 [INFO][5608] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" HandleID="k8s-pod-network.3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Workload="ip--172--31--29--210-k8s-coredns--5dd5756b68--j5lgb-eth0" Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.326 [INFO][5608] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:24.336419 containerd[1971]: 2024-06-25 18:38:24.333 [INFO][5601] k8s.go 621: Teardown processing complete. ContainerID="3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086" Jun 25 18:38:24.337253 containerd[1971]: time="2024-06-25T18:38:24.336485890Z" level=info msg="TearDown network for sandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\" successfully" Jun 25 18:38:24.343815 containerd[1971]: time="2024-06-25T18:38:24.343607195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:38:24.343815 containerd[1971]: time="2024-06-25T18:38:24.343712850Z" level=info msg="RemovePodSandbox \"3d18f3e55c5e71773d024cdfb06a25f492a6935ba9d953ccf73b92b8876d8086\" returns successfully" Jun 25 18:38:27.740438 systemd[1]: Started sshd@9-172.31.29.210:22-139.178.68.195:51616.service - OpenSSH per-connection server daemon (139.178.68.195:51616). Jun 25 18:38:27.975082 sshd[5623]: Accepted publickey for core from 139.178.68.195 port 51616 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:27.975855 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:27.996885 systemd-logind[1946]: New session 10 of user core. Jun 25 18:38:28.002259 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:38:28.431920 sshd[5623]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:28.437961 systemd-logind[1946]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:38:28.440046 systemd[1]: sshd@9-172.31.29.210:22-139.178.68.195:51616.service: Deactivated successfully. Jun 25 18:38:28.444943 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:38:28.446538 systemd-logind[1946]: Removed session 10. Jun 25 18:38:28.624188 systemd[1]: run-containerd-runc-k8s.io-068a99c16d4249129bd0b7acb93f1b8a1f774dee68aadba5865f46c453fdb71a-runc.DYgYAX.mount: Deactivated successfully. Jun 25 18:38:33.480535 systemd[1]: Started sshd@10-172.31.29.210:22-139.178.68.195:58156.service - OpenSSH per-connection server daemon (139.178.68.195:58156). Jun 25 18:38:33.671071 sshd[5656]: Accepted publickey for core from 139.178.68.195 port 58156 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:33.673131 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:33.692750 systemd-logind[1946]: New session 11 of user core. Jun 25 18:38:33.698126 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 18:38:33.973277 sshd[5656]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:33.985037 systemd[1]: sshd@10-172.31.29.210:22-139.178.68.195:58156.service: Deactivated successfully. Jun 25 18:38:33.989179 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:38:33.990764 systemd-logind[1946]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:38:33.993509 systemd-logind[1946]: Removed session 11. Jun 25 18:38:34.019265 systemd[1]: Started sshd@11-172.31.29.210:22-139.178.68.195:58170.service - OpenSSH per-connection server daemon (139.178.68.195:58170). Jun 25 18:38:34.250762 sshd[5671]: Accepted publickey for core from 139.178.68.195 port 58170 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:34.251423 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:34.257594 systemd-logind[1946]: New session 12 of user core. Jun 25 18:38:34.269883 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:38:35.148797 sshd[5671]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:35.152979 systemd-logind[1946]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:38:35.153844 systemd[1]: sshd@11-172.31.29.210:22-139.178.68.195:58170.service: Deactivated successfully. Jun 25 18:38:35.156650 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:38:35.158383 systemd-logind[1946]: Removed session 12. Jun 25 18:38:35.198302 systemd[1]: Started sshd@12-172.31.29.210:22-139.178.68.195:58180.service - OpenSSH per-connection server daemon (139.178.68.195:58180). Jun 25 18:38:35.407328 sshd[5697]: Accepted publickey for core from 139.178.68.195 port 58180 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:35.414101 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:35.430733 systemd-logind[1946]: New session 13 of user core. Jun 25 18:38:35.436714 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:38:35.656835 kubelet[3440]: I0625 18:38:35.656787 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-4cm5c" podStartSLOduration=46.091801989 podCreationTimestamp="2024-06-25 18:37:40 +0000 UTC" firstStartedPulling="2024-06-25 18:38:13.114780988 +0000 UTC m=+50.609884736" lastFinishedPulling="2024-06-25 18:38:22.679709732 +0000 UTC m=+60.174813493" observedRunningTime="2024-06-25 18:38:23.68627565 +0000 UTC m=+61.181379402" watchObservedRunningTime="2024-06-25 18:38:35.656730746 +0000 UTC m=+73.151834505" Jun 25 18:38:35.778032 sshd[5697]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:35.797743 systemd[1]: sshd@12-172.31.29.210:22-139.178.68.195:58180.service: Deactivated successfully. Jun 25 18:38:35.806098 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:38:35.808282 systemd-logind[1946]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:38:35.811615 systemd-logind[1946]: Removed session 13. Jun 25 18:38:40.818126 systemd[1]: Started sshd@13-172.31.29.210:22-139.178.68.195:55684.service - OpenSSH per-connection server daemon (139.178.68.195:55684). 
Jun 25 18:38:41.000702 sshd[5740]: Accepted publickey for core from 139.178.68.195 port 55684 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:41.002111 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:41.009533 systemd-logind[1946]: New session 14 of user core. Jun 25 18:38:41.018041 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:38:41.311887 sshd[5740]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:41.324678 systemd[1]: sshd@13-172.31.29.210:22-139.178.68.195:55684.service: Deactivated successfully. Jun 25 18:38:41.329261 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:38:41.331118 systemd-logind[1946]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:38:41.332551 systemd-logind[1946]: Removed session 14. Jun 25 18:38:46.353996 systemd[1]: Started sshd@14-172.31.29.210:22-139.178.68.195:55690.service - OpenSSH per-connection server daemon (139.178.68.195:55690). Jun 25 18:38:46.595677 sshd[5758]: Accepted publickey for core from 139.178.68.195 port 55690 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:46.600890 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:46.612316 systemd-logind[1946]: New session 15 of user core. Jun 25 18:38:46.618878 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:38:47.010077 sshd[5758]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:47.017303 systemd-logind[1946]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:38:47.019171 systemd[1]: sshd@14-172.31.29.210:22-139.178.68.195:55690.service: Deactivated successfully. Jun 25 18:38:47.024559 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:38:47.028836 systemd-logind[1946]: Removed session 15. Jun 25 18:38:52.058287 systemd[1]: Started sshd@15-172.31.29.210:22-139.178.68.195:32984.service - OpenSSH per-connection server daemon (139.178.68.195:32984). Jun 25 18:38:52.256712 sshd[5771]: Accepted publickey for core from 139.178.68.195 port 32984 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:52.260395 sshd[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:52.268748 systemd-logind[1946]: New session 16 of user core. Jun 25 18:38:52.275980 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:38:52.657046 sshd[5771]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:52.663909 systemd[1]: sshd@15-172.31.29.210:22-139.178.68.195:32984.service: Deactivated successfully. Jun 25 18:38:52.666583 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:38:52.669268 systemd-logind[1946]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:38:52.670894 systemd-logind[1946]: Removed session 16. Jun 25 18:38:57.706877 systemd[1]: Started sshd@16-172.31.29.210:22-139.178.68.195:32998.service - OpenSSH per-connection server daemon (139.178.68.195:32998). Jun 25 18:38:57.903754 sshd[5815]: Accepted publickey for core from 139.178.68.195 port 32998 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:57.906690 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:57.914397 systemd-logind[1946]: New session 17 of user core. Jun 25 18:38:57.923709 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 18:38:58.260153 sshd[5815]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:58.278683 systemd[1]: sshd@16-172.31.29.210:22-139.178.68.195:32998.service: Deactivated successfully. Jun 25 18:38:58.287298 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:38:58.288421 systemd-logind[1946]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:38:58.290574 systemd-logind[1946]: Removed session 17. Jun 25 18:39:00.478423 kubelet[3440]: I0625 18:39:00.478320 3440 topology_manager.go:215] "Topology Admit Handler" podUID="a1158865-88fa-4d04-97f3-82d1b860a575" podNamespace="calico-apiserver" podName="calico-apiserver-6bbfbd84d-fdrl5" Jun 25 18:39:00.519239 systemd[1]: Created slice kubepods-besteffort-poda1158865_88fa_4d04_97f3_82d1b860a575.slice - libcontainer container kubepods-besteffort-poda1158865_88fa_4d04_97f3_82d1b860a575.slice. Jun 25 18:39:00.550443 kubelet[3440]: I0625 18:39:00.550359 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d689b\" (UniqueName: \"kubernetes.io/projected/a1158865-88fa-4d04-97f3-82d1b860a575-kube-api-access-d689b\") pod \"calico-apiserver-6bbfbd84d-fdrl5\" (UID: \"a1158865-88fa-4d04-97f3-82d1b860a575\") " pod="calico-apiserver/calico-apiserver-6bbfbd84d-fdrl5" Jun 25 18:39:00.550443 kubelet[3440]: I0625 18:39:00.550421 3440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1158865-88fa-4d04-97f3-82d1b860a575-calico-apiserver-certs\") pod \"calico-apiserver-6bbfbd84d-fdrl5\" (UID: \"a1158865-88fa-4d04-97f3-82d1b860a575\") " pod="calico-apiserver/calico-apiserver-6bbfbd84d-fdrl5" Jun 25 18:39:00.662672 kubelet[3440]: E0625 18:39:00.651538 3440 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:39:00.712418 kubelet[3440]: E0625 18:39:00.712354 3440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1158865-88fa-4d04-97f3-82d1b860a575-calico-apiserver-certs podName:a1158865-88fa-4d04-97f3-82d1b860a575 nodeName:}" failed. No retries permitted until 2024-06-25 18:39:01.176811386 +0000 UTC m=+98.671915166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a1158865-88fa-4d04-97f3-82d1b860a575-calico-apiserver-certs") pod "calico-apiserver-6bbfbd84d-fdrl5" (UID: "a1158865-88fa-4d04-97f3-82d1b860a575") : secret "calico-apiserver-certs" not found Jun 25 18:39:01.427812 containerd[1971]: time="2024-06-25T18:39:01.426894232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbfbd84d-fdrl5,Uid:a1158865-88fa-4d04-97f3-82d1b860a575,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:39:02.014011 (udev-worker)[5870]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 18:39:02.029481 systemd-networkd[1806]: calic2866711659: Link UP Jun 25 18:39:02.045875 systemd-networkd[1806]: calic2866711659: Gained carrier Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.708 [INFO][5853] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0 calico-apiserver-6bbfbd84d- calico-apiserver a1158865-88fa-4d04-97f3-82d1b860a575 1064 0 2024-06-25 18:39:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bbfbd84d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-210 calico-apiserver-6bbfbd84d-fdrl5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2866711659 [] []}} ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Namespace="calico-apiserver" Pod="calico-apiserver-6bbfbd84d-fdrl5" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.711 [INFO][5853] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Namespace="calico-apiserver" Pod="calico-apiserver-6bbfbd84d-fdrl5" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.836 [INFO][5863] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" HandleID="k8s-pod-network.695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Workload="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.861 [INFO][5863] ipam_plugin.go 264: Auto assigning IP ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" HandleID="k8s-pod-network.695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Workload="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-210", "pod":"calico-apiserver-6bbfbd84d-fdrl5", "timestamp":"2024-06-25 18:39:01.836523825 +0000 UTC"}, Hostname:"ip-172-31-29-210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.862 [INFO][5863] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.863 [INFO][5863] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.863 [INFO][5863] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-210' Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.866 [INFO][5863] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" host="ip-172-31-29-210" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.872 [INFO][5863] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-210" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.891 [INFO][5863] ipam.go 489: Trying affinity for 192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.900 [INFO][5863] ipam.go 155: Attempting to load block cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.905 [INFO][5863] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.68.0/26 host="ip-172-31-29-210" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.905 [INFO][5863] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.68.0/26 handle="k8s-pod-network.695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" host="ip-172-31-29-210" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.909 [INFO][5863] ipam.go 1685: Creating new handle: k8s-pod-network.695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015 Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.915 [INFO][5863] ipam.go 1203: Writing block in order to claim IPs block=192.168.68.0/26 handle="k8s-pod-network.695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" host="ip-172-31-29-210" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.951 [INFO][5863] ipam.go 1216: Successfully claimed IPs: [192.168.68.5/26] block=192.168.68.0/26 handle="k8s-pod-network.695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" host="ip-172-31-29-210" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.956 [INFO][5863] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.68.5/26] handle="k8s-pod-network.695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" host="ip-172-31-29-210" Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.957 [INFO][5863] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:39:02.141089 containerd[1971]: 2024-06-25 18:39:01.959 [INFO][5863] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.68.5/26] IPv6=[] ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" HandleID="k8s-pod-network.695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Workload="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" Jun 25 18:39:02.143367 containerd[1971]: 2024-06-25 18:39:01.972 [INFO][5853] k8s.go 386: Populated endpoint ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Namespace="calico-apiserver" Pod="calico-apiserver-6bbfbd84d-fdrl5" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0", GenerateName:"calico-apiserver-6bbfbd84d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1158865-88fa-4d04-97f3-82d1b860a575", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 39, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbfbd84d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"", Pod:"calico-apiserver-6bbfbd84d-fdrl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2866711659", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:39:02.143367 containerd[1971]: 2024-06-25 18:39:01.974 [INFO][5853] k8s.go 387: Calico CNI using IPs: [192.168.68.5/32] ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Namespace="calico-apiserver" Pod="calico-apiserver-6bbfbd84d-fdrl5" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" Jun 25 18:39:02.143367 containerd[1971]: 2024-06-25 18:39:01.974 [INFO][5853] dataplane_linux.go 68: Setting the host side veth name to calic2866711659 ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Namespace="calico-apiserver" Pod="calico-apiserver-6bbfbd84d-fdrl5" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" Jun 25 18:39:02.143367 containerd[1971]: 2024-06-25 18:39:02.059 [INFO][5853] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Namespace="calico-apiserver" Pod="calico-apiserver-6bbfbd84d-fdrl5" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" Jun 25 18:39:02.143367 containerd[1971]: 2024-06-25 18:39:02.071 [INFO][5853] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Namespace="calico-apiserver" Pod="calico-apiserver-6bbfbd84d-fdrl5" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0", GenerateName:"calico-apiserver-6bbfbd84d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1158865-88fa-4d04-97f3-82d1b860a575", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 39, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbfbd84d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-210", ContainerID:"695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015", Pod:"calico-apiserver-6bbfbd84d-fdrl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2866711659", MAC:"aa:15:77:3f:d4:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:39:02.143367 containerd[1971]: 2024-06-25 18:39:02.130 [INFO][5853] k8s.go 500: Wrote updated endpoint to datastore ContainerID="695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015" Namespace="calico-apiserver" Pod="calico-apiserver-6bbfbd84d-fdrl5" WorkloadEndpoint="ip--172--31--29--210-k8s-calico--apiserver--6bbfbd84d--fdrl5-eth0" Jun 25 18:39:02.263224 containerd[1971]: time="2024-06-25T18:39:02.262833829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:39:02.264885 containerd[1971]: time="2024-06-25T18:39:02.263006783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:39:02.264885 containerd[1971]: time="2024-06-25T18:39:02.263050829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:39:02.264885 containerd[1971]: time="2024-06-25T18:39:02.263073314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:39:02.335722 systemd[1]: run-containerd-runc-k8s.io-695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015-runc.qdewre.mount: Deactivated successfully. Jun 25 18:39:02.352964 systemd[1]: Started cri-containerd-695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015.scope - libcontainer container 695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015. 
Jun 25 18:39:02.527318 containerd[1971]: time="2024-06-25T18:39:02.527179645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbfbd84d-fdrl5,Uid:a1158865-88fa-4d04-97f3-82d1b860a575,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015\"" Jun 25 18:39:02.531023 containerd[1971]: time="2024-06-25T18:39:02.530977213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:39:03.308584 systemd[1]: Started sshd@17-172.31.29.210:22-139.178.68.195:38818.service - OpenSSH per-connection server daemon (139.178.68.195:38818). Jun 25 18:39:03.544700 sshd[5927]: Accepted publickey for core from 139.178.68.195 port 38818 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:03.549425 sshd[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:03.558754 systemd-logind[1946]: New session 18 of user core. Jun 25 18:39:03.568958 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:39:03.696954 systemd-networkd[1806]: calic2866711659: Gained IPv6LL Jun 25 18:39:04.928730 sshd[5927]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:04.958554 systemd[1]: sshd@17-172.31.29.210:22-139.178.68.195:38818.service: Deactivated successfully. Jun 25 18:39:04.966284 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:39:04.974011 systemd-logind[1946]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:39:04.988272 systemd[1]: Started sshd@18-172.31.29.210:22-139.178.68.195:38826.service - OpenSSH per-connection server daemon (139.178.68.195:38826). Jun 25 18:39:04.991843 systemd-logind[1946]: Removed session 18. Jun 25 18:39:05.215522 sshd[5947]: Accepted publickey for core from 139.178.68.195 port 38826 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:05.216821 sshd[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:05.226581 systemd-logind[1946]: New session 19 of user core. Jun 25 18:39:05.232134 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:39:05.563300 systemd[1]: run-containerd-runc-k8s.io-6f125eda90ed27f0cbedf91a25a161590aff02c518f027eeb0d812ff5a773423-runc.0pIKmA.mount: Deactivated successfully. Jun 25 18:39:06.193189 sshd[5947]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:06.207849 systemd[1]: sshd@18-172.31.29.210:22-139.178.68.195:38826.service: Deactivated successfully. Jun 25 18:39:06.215329 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:39:06.218927 systemd-logind[1946]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:39:06.239404 ntpd[1941]: Listen normally on 13 calic2866711659 [fe80::ecee:eeff:feee:eeee%11]:123 Jun 25 18:39:06.252025 ntpd[1941]: 25 Jun 18:39:06 ntpd[1941]: Listen normally on 13 calic2866711659 [fe80::ecee:eeff:feee:eeee%11]:123 Jun 25 18:39:06.248938 systemd[1]: Started sshd@19-172.31.29.210:22-139.178.68.195:38842.service - OpenSSH per-connection server daemon (139.178.68.195:38842). Jun 25 18:39:06.271052 systemd-logind[1946]: Removed session 19. Jun 25 18:39:06.508877 sshd[5996]: Accepted publickey for core from 139.178.68.195 port 38842 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:06.514006 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:06.523156 systemd-logind[1946]: New session 20 of user core. 
Jun 25 18:39:06.529922 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:39:07.046642 containerd[1971]: time="2024-06-25T18:39:07.046591776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:39:07.049280 containerd[1971]: time="2024-06-25T18:39:07.048835556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 18:39:07.052071 containerd[1971]: time="2024-06-25T18:39:07.051825413Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:39:07.057716 containerd[1971]: time="2024-06-25T18:39:07.057405986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:39:07.059261 containerd[1971]: time="2024-06-25T18:39:07.058542307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.527511296s" Jun 25 18:39:07.059261 containerd[1971]: time="2024-06-25T18:39:07.058591881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:39:07.073881 containerd[1971]: time="2024-06-25T18:39:07.073840491Z" level=info msg="CreateContainer within sandbox \"695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:39:07.096071 containerd[1971]: time="2024-06-25T18:39:07.096019843Z" level=info msg="CreateContainer within sandbox \"695e0c6ba14e62a1e39bc1f846c7fe0213ccb452d3353fb59496f88199522015\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d23c58a2e97d4773ce88c27930d5f519bd8420ba2272d52e3fdc57845312ae8\"" Jun 25 18:39:07.096748 containerd[1971]: time="2024-06-25T18:39:07.096719044Z" level=info msg="StartContainer for \"8d23c58a2e97d4773ce88c27930d5f519bd8420ba2272d52e3fdc57845312ae8\"" Jun 25 18:39:07.223961 systemd[1]: run-containerd-runc-k8s.io-8d23c58a2e97d4773ce88c27930d5f519bd8420ba2272d52e3fdc57845312ae8-runc.sRPMX0.mount: Deactivated successfully. Jun 25 18:39:07.239098 systemd[1]: Started cri-containerd-8d23c58a2e97d4773ce88c27930d5f519bd8420ba2272d52e3fdc57845312ae8.scope - libcontainer container 8d23c58a2e97d4773ce88c27930d5f519bd8420ba2272d52e3fdc57845312ae8. 
Jun 25 18:39:07.500612 containerd[1971]: time="2024-06-25T18:39:07.499564622Z" level=info msg="StartContainer for \"8d23c58a2e97d4773ce88c27930d5f519bd8420ba2272d52e3fdc57845312ae8\" returns successfully" Jun 25 18:39:07.940780 kubelet[3440]: I0625 18:39:07.940740 3440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bbfbd84d-fdrl5" podStartSLOduration=3.410232912 podCreationTimestamp="2024-06-25 18:39:00 +0000 UTC" firstStartedPulling="2024-06-25 18:39:02.529470815 +0000 UTC m=+100.024574560" lastFinishedPulling="2024-06-25 18:39:07.059904806 +0000 UTC m=+104.555008547" observedRunningTime="2024-06-25 18:39:07.939468581 +0000 UTC m=+105.434572361" watchObservedRunningTime="2024-06-25 18:39:07.940666899 +0000 UTC m=+105.435770658" Jun 25 18:39:08.583083 sshd[5996]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:08.591489 systemd[1]: sshd@19-172.31.29.210:22-139.178.68.195:38842.service: Deactivated successfully. Jun 25 18:39:08.595954 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:39:08.607919 systemd-logind[1946]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:39:08.637160 systemd[1]: Started sshd@20-172.31.29.210:22-139.178.68.195:46390.service - OpenSSH per-connection server daemon (139.178.68.195:46390). Jun 25 18:39:08.647909 systemd-logind[1946]: Removed session 20. Jun 25 18:39:08.860845 sshd[6062]: Accepted publickey for core from 139.178.68.195 port 46390 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:08.863203 sshd[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:08.873258 systemd-logind[1946]: New session 21 of user core. Jun 25 18:39:08.895898 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:39:10.507864 sshd[6062]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:10.514704 systemd[1]: sshd@20-172.31.29.210:22-139.178.68.195:46390.service: Deactivated successfully. Jun 25 18:39:10.520482 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:39:10.522443 systemd-logind[1946]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:39:10.541176 systemd[1]: Started sshd@21-172.31.29.210:22-139.178.68.195:46404.service - OpenSSH per-connection server daemon (139.178.68.195:46404). Jun 25 18:39:10.544355 systemd-logind[1946]: Removed session 21. Jun 25 18:39:10.765577 sshd[6075]: Accepted publickey for core from 139.178.68.195 port 46404 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:10.773457 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:10.784329 systemd-logind[1946]: New session 22 of user core. Jun 25 18:39:10.788931 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:39:11.105460 sshd[6075]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:11.113699 systemd[1]: sshd@21-172.31.29.210:22-139.178.68.195:46404.service: Deactivated successfully. Jun 25 18:39:11.120405 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:39:11.125291 systemd-logind[1946]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:39:11.127174 systemd-logind[1946]: Removed session 22. Jun 25 18:39:16.144111 systemd[1]: Started sshd@22-172.31.29.210:22-139.178.68.195:46418.service - OpenSSH per-connection server daemon (139.178.68.195:46418). 
Jun 25 18:39:16.334960 sshd[6091]: Accepted publickey for core from 139.178.68.195 port 46418 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:16.335853 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:16.356133 systemd-logind[1946]: New session 23 of user core. Jun 25 18:39:16.367501 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:39:16.590844 sshd[6091]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:16.594293 systemd[1]: sshd@22-172.31.29.210:22-139.178.68.195:46418.service: Deactivated successfully. Jun 25 18:39:16.598360 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:39:16.600875 systemd-logind[1946]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:39:16.602038 systemd-logind[1946]: Removed session 23. Jun 25 18:39:21.645157 systemd[1]: Started sshd@23-172.31.29.210:22-139.178.68.195:56692.service - OpenSSH per-connection server daemon (139.178.68.195:56692). Jun 25 18:39:21.834351 sshd[6112]: Accepted publickey for core from 139.178.68.195 port 56692 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:21.839014 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:21.854191 systemd-logind[1946]: New session 24 of user core. Jun 25 18:39:21.861898 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:39:22.141361 sshd[6112]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:22.171707 systemd[1]: sshd@23-172.31.29.210:22-139.178.68.195:56692.service: Deactivated successfully. Jun 25 18:39:22.184288 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:39:22.187300 systemd-logind[1946]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:39:22.190160 systemd-logind[1946]: Removed session 24. Jun 25 18:39:27.180033 systemd[1]: Started sshd@24-172.31.29.210:22-139.178.68.195:56708.service - OpenSSH per-connection server daemon (139.178.68.195:56708). Jun 25 18:39:27.397307 sshd[6131]: Accepted publickey for core from 139.178.68.195 port 56708 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:27.398136 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:27.409787 systemd-logind[1946]: New session 25 of user core. Jun 25 18:39:27.422018 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:39:27.673039 sshd[6131]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:27.678966 systemd-logind[1946]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:39:27.680019 systemd[1]: sshd@24-172.31.29.210:22-139.178.68.195:56708.service: Deactivated successfully. Jun 25 18:39:27.682731 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:39:27.684450 systemd-logind[1946]: Removed session 25. Jun 25 18:39:28.604349 systemd[1]: run-containerd-runc-k8s.io-068a99c16d4249129bd0b7acb93f1b8a1f774dee68aadba5865f46c453fdb71a-runc.jsWOxF.mount: Deactivated successfully. Jun 25 18:39:32.728388 systemd[1]: Started sshd@25-172.31.29.210:22-139.178.68.195:57906.service - OpenSSH per-connection server daemon (139.178.68.195:57906). 
Jun 25 18:39:32.924653 sshd[6168]: Accepted publickey for core from 139.178.68.195 port 57906 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:32.925258 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:32.932413 systemd-logind[1946]: New session 26 of user core. Jun 25 18:39:32.938903 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:39:33.233411 sshd[6168]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:33.239427 systemd-logind[1946]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:39:33.240332 systemd[1]: sshd@25-172.31.29.210:22-139.178.68.195:57906.service: Deactivated successfully. Jun 25 18:39:33.247535 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:39:33.251252 systemd-logind[1946]: Removed session 26. Jun 25 18:39:35.538110 systemd[1]: run-containerd-runc-k8s.io-6f125eda90ed27f0cbedf91a25a161590aff02c518f027eeb0d812ff5a773423-runc.Mdnyx7.mount: Deactivated successfully. Jun 25 18:39:38.294200 systemd[1]: Started sshd@26-172.31.29.210:22-139.178.68.195:33032.service - OpenSSH per-connection server daemon (139.178.68.195:33032). Jun 25 18:39:38.547653 sshd[6216]: Accepted publickey for core from 139.178.68.195 port 33032 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:38.559859 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:38.582152 systemd-logind[1946]: New session 27 of user core. Jun 25 18:39:38.587884 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 18:39:38.936503 sshd[6216]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:38.946012 systemd-logind[1946]: Session 27 logged out. Waiting for processes to exit. Jun 25 18:39:38.947384 systemd[1]: sshd@26-172.31.29.210:22-139.178.68.195:33032.service: Deactivated successfully. Jun 25 18:39:38.950203 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 18:39:38.951924 systemd-logind[1946]: Removed session 27. Jun 25 18:39:43.980055 systemd[1]: Started sshd@27-172.31.29.210:22-139.178.68.195:33040.service - OpenSSH per-connection server daemon (139.178.68.195:33040). Jun 25 18:39:44.159082 sshd[6231]: Accepted publickey for core from 139.178.68.195 port 33040 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:44.161910 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:44.173975 systemd-logind[1946]: New session 28 of user core. Jun 25 18:39:44.181310 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 18:39:44.458947 sshd[6231]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:44.473983 systemd[1]: sshd@27-172.31.29.210:22-139.178.68.195:33040.service: Deactivated successfully. Jun 25 18:39:44.478821 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 18:39:44.482162 systemd-logind[1946]: Session 28 logged out. Waiting for processes to exit. Jun 25 18:39:44.485574 systemd-logind[1946]: Removed session 28. Jun 25 18:39:49.498375 systemd[1]: Started sshd@28-172.31.29.210:22-139.178.68.195:32940.service - OpenSSH per-connection server daemon (139.178.68.195:32940). 
Jun 25 18:39:49.718964 sshd[6260]: Accepted publickey for core from 139.178.68.195 port 32940 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:39:49.720530 sshd[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:39:49.727298 systemd-logind[1946]: New session 29 of user core. Jun 25 18:39:49.729873 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 25 18:39:50.012045 sshd[6260]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:50.019271 systemd[1]: sshd@28-172.31.29.210:22-139.178.68.195:32940.service: Deactivated successfully. Jun 25 18:39:50.019341 systemd-logind[1946]: Session 29 logged out. Waiting for processes to exit. Jun 25 18:39:50.023059 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 18:39:50.024764 systemd-logind[1946]: Removed session 29. Jun 25 18:39:58.587620 systemd[1]: run-containerd-runc-k8s.io-068a99c16d4249129bd0b7acb93f1b8a1f774dee68aadba5865f46c453fdb71a-runc.q0HZdr.mount: Deactivated successfully. Jun 25 18:40:04.582934 systemd[1]: cri-containerd-695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc.scope: Deactivated successfully. Jun 25 18:40:04.583270 systemd[1]: cri-containerd-695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc.scope: Consumed 6.704s CPU time. Jun 25 18:40:04.671870 systemd[1]: cri-containerd-61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9.scope: Deactivated successfully. Jun 25 18:40:04.672582 systemd[1]: cri-containerd-61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9.scope: Consumed 3.852s CPU time, 26.5M memory peak, 0B memory swap peak. Jun 25 18:40:04.695956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc-rootfs.mount: Deactivated successfully. Jun 25 18:40:04.736770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9-rootfs.mount: Deactivated successfully. 
Jun 25 18:40:04.743888 containerd[1971]: time="2024-06-25T18:40:04.696225172Z" level=info msg="shim disconnected" id=695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc namespace=k8s.io Jun 25 18:40:04.745305 containerd[1971]: time="2024-06-25T18:40:04.743949121Z" level=warning msg="cleaning up after shim disconnected" id=695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc namespace=k8s.io Jun 25 18:40:04.745305 containerd[1971]: time="2024-06-25T18:40:04.725787609Z" level=info msg="shim disconnected" id=61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9 namespace=k8s.io Jun 25 18:40:04.745305 containerd[1971]: time="2024-06-25T18:40:04.744036201Z" level=warning msg="cleaning up after shim disconnected" id=61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9 namespace=k8s.io Jun 25 18:40:04.745305 containerd[1971]: time="2024-06-25T18:40:04.744047472Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:40:04.745887 containerd[1971]: time="2024-06-25T18:40:04.745355489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:40:04.772765 containerd[1971]: time="2024-06-25T18:40:04.772709194Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:40:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:40:05.231498 kubelet[3440]: I0625 18:40:05.231141 3440 scope.go:117] "RemoveContainer" containerID="695ff3e403c2b3d413073a7094e5b22ee611eb95713c289e0bab76fd361f1fcc" Jun 25 18:40:05.235314 kubelet[3440]: I0625 18:40:05.235274 3440 scope.go:117] "RemoveContainer" containerID="61eb53002fb23ff818dfdfbff302c7387b3bfd9e6617b8a5bd4f85a6986505c9" Jun 25 18:40:05.266490 containerd[1971]: time="2024-06-25T18:40:05.266270640Z" level=info msg="CreateContainer within sandbox \"9cdb17ea7555e1fc00bca55994e457a3e4f0b89c9dbfe836950321b1cf1f13dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 18:40:05.268780 containerd[1971]: time="2024-06-25T18:40:05.268317729Z" level=info msg="CreateContainer within sandbox \"820454c9d08417a8a716bbd411307fe6fbddaa299cbe3beaa214d56b42c6ae8b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 18:40:05.311279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558180859.mount: Deactivated successfully. 
Jun 25 18:40:05.335981 containerd[1971]: time="2024-06-25T18:40:05.335928100Z" level=info msg="CreateContainer within sandbox \"9cdb17ea7555e1fc00bca55994e457a3e4f0b89c9dbfe836950321b1cf1f13dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cbae1e28d8fd8c89707b507e3a9326dd1315c9f69e653e9a69dc969734152a3d\"" Jun 25 18:40:05.336873 containerd[1971]: time="2024-06-25T18:40:05.336838957Z" level=info msg="StartContainer for \"cbae1e28d8fd8c89707b507e3a9326dd1315c9f69e653e9a69dc969734152a3d\"" Jun 25 18:40:05.342131 containerd[1971]: time="2024-06-25T18:40:05.342076543Z" level=info msg="CreateContainer within sandbox \"820454c9d08417a8a716bbd411307fe6fbddaa299cbe3beaa214d56b42c6ae8b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c38d8c52fe94db529e2f242985cf22ea54f10fa364b0bbe7f6ee466407855ef8\"" Jun 25 18:40:05.343064 containerd[1971]: time="2024-06-25T18:40:05.343033256Z" level=info msg="StartContainer for \"c38d8c52fe94db529e2f242985cf22ea54f10fa364b0bbe7f6ee466407855ef8\"" Jun 25 18:40:05.434944 systemd[1]: Started cri-containerd-c38d8c52fe94db529e2f242985cf22ea54f10fa364b0bbe7f6ee466407855ef8.scope - libcontainer container c38d8c52fe94db529e2f242985cf22ea54f10fa364b0bbe7f6ee466407855ef8. Jun 25 18:40:05.437562 systemd[1]: Started cri-containerd-cbae1e28d8fd8c89707b507e3a9326dd1315c9f69e653e9a69dc969734152a3d.scope - libcontainer container cbae1e28d8fd8c89707b507e3a9326dd1315c9f69e653e9a69dc969734152a3d. Jun 25 18:40:05.677254 containerd[1971]: time="2024-06-25T18:40:05.677081438Z" level=info msg="StartContainer for \"c38d8c52fe94db529e2f242985cf22ea54f10fa364b0bbe7f6ee466407855ef8\" returns successfully" Jun 25 18:40:05.679452 containerd[1971]: time="2024-06-25T18:40:05.678981444Z" level=info msg="StartContainer for \"cbae1e28d8fd8c89707b507e3a9326dd1315c9f69e653e9a69dc969734152a3d\" returns successfully" Jun 25 18:40:06.564413 kubelet[3440]: E0625 18:40:06.563122 3440 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 18:40:09.050456 systemd[1]: cri-containerd-387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841.scope: Deactivated successfully. Jun 25 18:40:09.050986 systemd[1]: cri-containerd-387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841.scope: Consumed 1.811s CPU time, 13.4M memory peak, 0B memory swap peak. Jun 25 18:40:09.114614 containerd[1971]: time="2024-06-25T18:40:09.113877869Z" level=info msg="shim disconnected" id=387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841 namespace=k8s.io Jun 25 18:40:09.114614 containerd[1971]: time="2024-06-25T18:40:09.113953254Z" level=warning msg="cleaning up after shim disconnected" id=387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841 namespace=k8s.io Jun 25 18:40:09.114614 containerd[1971]: time="2024-06-25T18:40:09.113971282Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:40:09.117430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841-rootfs.mount: Deactivated successfully. 
Jun 25 18:40:09.266699 kubelet[3440]: I0625 18:40:09.261734 3440 scope.go:117] "RemoveContainer" containerID="387ec4c2786c43a3581851bd80159d5a65c37326f63328dfc89c95e7e1701841" Jun 25 18:40:09.274576 containerd[1971]: time="2024-06-25T18:40:09.274535092Z" level=info msg="CreateContainer within sandbox \"7cd763947d580c2bd1bd560fd1a55591a71c94527efb6a023bec39e3dba2e543\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 18:40:09.355746 containerd[1971]: time="2024-06-25T18:40:09.354951037Z" level=info msg="CreateContainer within sandbox \"7cd763947d580c2bd1bd560fd1a55591a71c94527efb6a023bec39e3dba2e543\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1170796867dde205c0f48f314c2ca3bc9a0bf5c7da87e760d7598f6e03edaf04\"" Jun 25 18:40:09.356335 containerd[1971]: time="2024-06-25T18:40:09.356302748Z" level=info msg="StartContainer for \"1170796867dde205c0f48f314c2ca3bc9a0bf5c7da87e760d7598f6e03edaf04\"" Jun 25 18:40:09.442894 systemd[1]: Started cri-containerd-1170796867dde205c0f48f314c2ca3bc9a0bf5c7da87e760d7598f6e03edaf04.scope - libcontainer container 1170796867dde205c0f48f314c2ca3bc9a0bf5c7da87e760d7598f6e03edaf04. Jun 25 18:40:09.515372 containerd[1971]: time="2024-06-25T18:40:09.515325442Z" level=info msg="StartContainer for \"1170796867dde205c0f48f314c2ca3bc9a0bf5c7da87e760d7598f6e03edaf04\" returns successfully" Jun 25 18:40:16.563582 kubelet[3440]: E0625 18:40:16.563457 3440 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"